[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_note-14] | [TOKENS: 1458] |
Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer, and artificial intelligence researcher at the Massachusetts Institute of Technology, as well as a social activist and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for empowering youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200 and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation. Dr. Farid was elected to the Forbes 30 Under 30 list of 2019 and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from immigrant families of Mizrahi Jews from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age and was classified as a problematic and violent student; his ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school to secure a better future for his family. During elementary school, Maor took part in local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. He was later recruited to the Israel Defense Forces and drafted into the Brakim Program, an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of captain. Education and academic career As part of the four-year Brakim Program, Maor completed his bachelor's and master's degrees in mechanical engineering at the Technion with honors. He then began his Ph.D. research as a collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were predicting the irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks due to earthquakes that took place in Italy (2012) and Mexico (2017). The energy absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. This research later expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to implement the findings on existing sensitive systems, and won funding of 1.5 million NIS from the Pazy Foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D.
He was the youngest Technion graduate of that year, at the age of 24. At the graduation ceremonies, he honored his parents by having them receive the diplomas on his behalf. That same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. That same year, he received the excellence research grant of the Israel Academy of Sciences and Humanities for leading his research in collaboration between MIT and the Technion. Social activism Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. In 2010–2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" program for high-school students at risk in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project to mitigate social gaps in Israeli society by empowering youths from the social, economic, and geographical periphery toward excellence, self-fulfillment, and formal education. In 2018, Learn to Succeed became an official non-profit organization. That same year, Farid led a 150,000 NIS crowdfunding campaign to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he underwent from being a violent teenager to becoming the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; gives full scholarships to STEM students from the periphery who serve as mentors of youths, both Jews and Arabs, from disadvantaged backgrounds; runs a hotline that provides online practical and emotional support to hundreds of youths, parents, and educators; organizes inspirational activities with a military orientation to increase the motivation of its teenage members for significant military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed for the unit, an opportunity usually reserved for the students with the highest grades in the matriculation exams in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel.
Moreover, he serves as a volunteer in the American Technion Society. Personal life Farid is married to Michal. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Subrahmanyan_Chandrasekhar] | [TOKENS: 4432] |
Subrahmanyan Chandrasekhar Subrahmanyan Chandrasekhar (/ˌtʃəndrəˈʃeɪkər/ CHƏN-drə-SHAY-kər; Tamil: சுப்பிரமணியன் சந்திரசேகர், romanized: Cuppiramaṇiyaṉ Cantiracēkar; 19 October 1910 – 21 August 1995) was an Indian-American theoretical physicist. He shared the 1983 Nobel Prize in Physics "for his theoretical studies of the physical processes of importance to the structure and evolution of the stars." His mathematical treatment of stellar evolution yielded many of the current theoretical models of the later evolutionary stages of massive stars. The Chandrasekhar limit describes the maximum mass of a white dwarf (1.44 solar masses); above it, a stellar remnant will collapse to form a neutron star or black hole. Many concepts, institutions and inventions, including the Chandra X-ray Observatory, are named after him. Born during the British Raj, Chandrasekhar worked on a wide variety of problems in physics during his lifetime, contributing to the contemporary understanding of stellar structure, white dwarfs, stellar dynamics, stochastic processes, radiative transfer, the quantum theory of the hydrogen anion, hydrodynamic and hydromagnetic stability, turbulence, the equilibrium and stability of ellipsoidal figures of equilibrium, general relativity, the mathematical theory of black holes, and the theory of colliding gravitational waves. At the University of Cambridge, he developed a theoretical model explaining the structure of white dwarf stars that took into account the relativistic variation of mass with the velocities of the electrons that comprise their degenerate matter. Chandrasekhar revised the models of stellar dynamics first outlined by Jan Oort and others by considering the effects of fluctuating gravitational fields within the Milky Way on stars rotating about the galactic center. His solution to this complex dynamical problem involved a set of twenty partial differential equations describing a new quantity he termed "dynamical friction", which has the dual effects of decelerating the star and helping to stabilize clusters of stars. Chandrasekhar extended this analysis to the interstellar medium, showing that clouds of galactic gas and dust are distributed very unevenly. Chandrasekhar studied at Presidency College, Madras (now Chennai) and the University of Cambridge. A long-time professor at the University of Chicago, he did some of his studies at the Yerkes Observatory, and served as editor of The Astrophysical Journal from 1952 to 1971. He was on the faculty at Chicago from 1937 until his death in 1995 at the age of 84, and was the Morton D. Hull Distinguished Service Professor of Theoretical Astrophysics. Early life and education Chandrasekhar was born on 19 October 1910 in Lahore, then part of the British Raj (in present-day Pakistan), into a Tamil Brahmin family, to Sita Balakrishnan (1891–1931) and Chandrasekhara Subrahmanya Ayyar (1885–1960), who was stationed in Lahore as Deputy Auditor General of the Northwestern Railways at the time of Chandrasekhar's birth. Chandra, as Chandrasekhar was known, had two elder sisters, Rajalakshmi and Balaparvathi; three younger brothers, Vishwanathan, Balakrishnan, and Ramanathan; and four younger sisters, Sarada, Vidya, Savitri, and Sundari. His paternal uncle was the Indian physicist and Nobel laureate Chandrasekhara Venkata Raman. His mother was devoted to intellectual pursuits, had translated Henrik Ibsen's A Doll's House into Tamil, and is credited with arousing Chandra's intellectual curiosity at an early age.
The family moved from Lahore to Allahabad in 1916, and finally settled in Madras in 1918. Chandrasekhar was tutored at home until the age of 12. In middle school his father taught him mathematics and physics and his mother taught him Tamil. He later attended the Hindu High School, Triplicane, Madras, during the years 1922–25. Subsequently, he studied at Presidency College, Madras (affiliated to the University of Madras) from 1925 to 1930, writing his first paper, "The Compton Scattering and the New Statistics", in 1929 after being inspired by a lecture by Arnold Sommerfeld. He obtained his bachelor's degree, BSc (Hon.), in physics in June 1930. In July 1930, Chandrasekhar was awarded a Government of India scholarship to pursue graduate studies at the University of Cambridge, where he was admitted to Trinity College, his admission secured by R. H. Fowler, to whom he had communicated his first paper. During his travels to England, Chandrasekhar spent his time working out the statistical mechanics of the degenerate electron gas in white dwarf stars, providing relativistic corrections to Fowler's previous work (see Legacy below). University of Cambridge In his first year at Cambridge, as a research student of Fowler, Chandrasekhar spent his time calculating mean opacities and applying his results to the construction of an improved model for the limiting mass of a degenerate star. At the meetings of the Royal Astronomical Society, he met E. A. Milne. At the invitation of Max Born, he spent the summer of 1931, his second year of post-graduate studies, at Born's institute at Göttingen, working on opacities, atomic absorption coefficients, and model stellar photospheres. On the advice of Paul Dirac, he spent his final year of graduate studies (September 1932 to May 1933) at the Institute for Theoretical Physics in Copenhagen, where he met Niels Bohr. At Copenhagen, he became friends with Victor Weisskopf, Léon Rosenfeld, George Placzek and Max Delbrück, as they were all living in the same pension. After receiving a bronze medal for his work on degenerate stars, Chandrasekhar was awarded his PhD degree at Cambridge in the summer of 1933, with a thesis on rotating self-gravitating polytropes. On 9 October, he was elected to a Prize Fellowship at Trinity College for the period 1933–1937, becoming only the second Indian to receive a Trinity Fellowship, after Srinivasa Ramanujan 16 years earlier. He had been so certain of failing to obtain the fellowship that he had already made arrangements to study under Milne that autumn at Oxford, even going to the extent of renting a flat there. In 1934, Chandrasekhar visited Leningrad in the Soviet Union and met other astrophysicists including Viktor Ambartsumian and Nikolai Aleksandrovich Kozyrev. He also met Lev Landau. During this time, Chandrasekhar became acquainted with the British physicist Sir Arthur Eddington. Eddington took an interest in his work, but in January 1935 gave a talk severely criticizing Chandrasekhar's work (see the Chandrasekhar–Eddington dispute below). Career and research In 1935, Chandrasekhar was invited by the director of the Harvard Observatory, Harlow Shapley, to be a visiting lecturer in theoretical astrophysics for a three-month period. He travelled to the United States in December. During his visit to Harvard, Chandrasekhar greatly impressed Shapley, but declined his offer of a Harvard research fellowship.
At the same time, Chandrasekhar met Gerard Kuiper, a noted Dutch observational astrophysicist who was then a leading authority on white dwarfs. Kuiper had recently been recruited by Otto Struve, the director of the Yerkes Observatory in Williams Bay, Wisconsin, which was run by the University of Chicago, and the university's president, Robert Maynard Hutchins. Having heard of Chandrasekhar, Struve was then considering him for one of three faculty posts in astrophysics, along with Kuiper; the other opening had been filled by Bengt Strömgren, a Danish theorist. Following a recommendation from Kuiper, Struve invited Chandrasekhar to Yerkes in March 1936 and offered him the job. Though Chandrasekhar was keenly interested, he initially declined the offer and left for England; after Hutchins sent a radiogram to Chandrasekhar during the voyage, he finally accepted, returning to Yerkes as an assistant professor of theoretical astrophysics in December 1936. Hutchins also intervened on an occasion when Chandra's participation in teaching a course organised by Struve was vetoed by the dean, Henry Gale, out of racial prejudice; Hutchins said, "By all means have Mr. Chandrasekhar teach". Chandrasekhar remained at the University of Chicago for the rest of his career, from 1937 to 1995. He was promoted to associate professor in 1941 and to full professor two years later, at the age of 33. In 1946, when Princeton University offered Chandrasekhar a position vacated by Henry Norris Russell with a salary double that of Chicago's, Hutchins matched Princeton's salary and persuaded Chandrasekhar to stay in Chicago. In 1952, at Enrico Fermi's invitation, he became the Morton D. Hull Distinguished Service Professor of Theoretical Astrophysics and joined the Enrico Fermi Institute. In 1953, he and his wife, Lalitha Chandrasekhar, took American citizenship. After the Laboratory for Astrophysics and Space Research (LASR) was built by NASA in 1966 at the university, Chandrasekhar occupied one of the four corner offices on the second floor. (The other corners housed John A. Simpson, Peter Meyer, and Eugene N. Parker.) Chandrasekhar lived at 4800 Lake Shore Drive after the high-rise apartment complex was built in the late 1960s, and later at the 5550 Dorchester Building. After graduating from Cambridge, Chandrasekhar, who was in close contact with Arthur Eddington, presented a full solution to his stellar equation at the Royal Astronomical Society meeting in 1935. Eddington booked a talk right after Chandrasekhar's, in which he openly criticized Chandrasekhar's theory. This depressed Chandrasekhar and sparked a scientific dispute. Eddington refused to accept a limit for the mass of a star and proposed an alternative model. Chandrasekhar sought support from prominent physicists such as Léon Rosenfeld, Niels Bohr and Christian Møller, who found Eddington's arguments lacking. The tension persisted through the 1930s, as Eddington continued to openly criticize Chandrasekhar during meetings and the two compared each other's theories in publications. Chandrasekhar ultimately completed his theory of white dwarfs in 1939, receiving praise from others in the field. Eddington died in 1944, and despite their disagreements, Chandrasekhar continued to state that he admired Eddington and considered him a friend. During World War II, Chandrasekhar worked at the Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland.
While there, he worked on problems of ballistics, resulting in reports such as 1943's On the Decay of Plane Shock Waves, as well as Optimum Height for the Bursting of a 105mm Shell, On the Conditions for the Existence of Three Shock Waves, On the Determination of the Velocity of a Projectile from the Beat Waves Produced by Interference with the Waves of Modified Frequency Reflected from the Projectile, and The Normal Reflection of a Blast Wave. Chandrasekhar's expertise in hydrodynamics led Robert Oppenheimer to invite him to join the Manhattan Project at Los Alamos, but delays in the processing of his security clearance prevented him from contributing to the project. It has been rumoured that he visited the Calutron project. He wrote that his scientific research was motivated by his desire to participate in the progress of different subjects in science to the best of his ability, and that the prime motive underlying his work was systematization. "What a scientist tries to do essentially is to select a certain domain, a certain aspect, or a certain detail, and see if that takes its appropriate place in a general scheme which has form and coherence; and, if not, to seek further information which would help him to do that". Chandrasekhar developed a unique style of mastering several fields of physics and astrophysics; consequently, his working life can be divided into distinct periods. He would exhaustively study a specific area, publish several papers in it, and then write a book summarizing the major concepts in the field. He would then move on to another field for the next decade and repeat the pattern. Thus he studied stellar structure, including the theory of white dwarfs, during the years 1929 to 1939, and subsequently focused on stellar dynamics and the theory of Brownian motion from 1939 to 1943. Next, he concentrated on the theory of radiative transfer and the quantum theory of the negative ion of hydrogen from 1943 to 1950. This was followed by sustained work on turbulence and hydrodynamic and hydromagnetic stability from 1950 to 1961. In the 1960s, he studied both the equilibrium and the stability of ellipsoidal figures of equilibrium, and general relativity. During the period 1971 to 1983 he studied the mathematical theory of black holes, and, finally, during the late 1980s, he worked on the theory of colliding gravitational waves. Chandra worked closely with his students and expressed pride in the fact that over a 50-year period (from roughly 1930 to 1980), the average age of his co-author collaborators had remained the same, at around 30. He insisted that students address him as "Prof. Chandrasekhar" until they received their PhD degree, after which time they (like other colleagues) were encouraged to address him as "Chandra". When Chandrasekhar was working at the Yerkes Observatory in the 1940s, he would drive the 150 miles (240 km) to and from Chicago every weekend to teach a course at the University of Chicago. Two of the students who took the course, Tsung-Dao Lee and Chen-Ning Yang, won the Nobel Prize before he did. Regarding classroom interactions during his lectures, astronomer Carl Sagan stated from firsthand experience that "frivolous questions" from unprepared students were "dealt with in the manner of a summary execution", while questions of merit "were given serious attention and response". Sagan recalled, "I learned what true mathematical elegance is from Subrahmanyan Chandrasekhar." From 1952 to 1971 Chandrasekhar was editor of The Astrophysical Journal.
When Eugene Parker submitted a paper on his discovery of the solar wind in 1957, two eminent reviewers rejected the paper. However, since Chandra, as editor, could not find any mathematical flaws in Parker's work, he went ahead and published the paper in 1958. During the years 1990 to 1995, Chandrasekhar worked on a project devoted to explaining the detailed geometric arguments in Sir Isaac Newton's Philosophiae Naturalis Principia Mathematica using the language and methods of ordinary calculus. The effort resulted in the book Newton's Principia for the Common Reader, published in 1995. Chandrasekhar also worked on the collision of gravitational waves and algebraically special perturbations. He had a strong interest in literature and the arts. In 1975, he lectured on patterns of creativity in Shakespeare, Beethoven and Newton. Personal life Chandrasekhar was the nephew of C. V. Raman, who was awarded the Nobel Prize for Physics in 1930. Chandrasekhar married Lalitha Doraiswamy in September 1936, having met her as a fellow student at Presidency College. He became a naturalised citizen of the U.S. in 1953. Many considered him warm, positive, generous, unassuming, meticulous, and open to debate, while some others found him private, intimidating, impatient and stubborn regarding non-scientific matters, and unforgiving toward those who ridiculed his work. Chandrasekhar was a vegetarian. Chandrasekhar died of a heart attack at the University of Chicago Hospital in 1995, having survived a prior heart attack in 1975. He was survived by his wife, who died on 2 September 2013 at the age of 102. She was a serious student of literature and western classical music. Once, when involved in a discussion about the Bhagavad Gita, Chandrasekhar said: "I should like to preface my remarks with a personal statement in order that my later remarks will not be misunderstood. I consider myself an atheist". This was also confirmed many times in his other talks. Kameshwar C. Wali quoted him saying: "I am not religious in any sense; in fact, I consider myself an atheist." In an interview with Kevin Krisciunas at the University of Chicago, on 6 October 1987, Chandrasekhar commented: "Of course, he (Otto Struve) knew I was an atheist, and he never brought up the subject with me". In deference to Subrahmanyan Chandrasekhar's atheistic views, his wife refrained from displaying the small religious icons of deities she had brought with her. Awards, honours and legacy Chandrasekhar was awarded half of the Nobel Prize in Physics in 1983 for his studies on the physical processes important to the structure and evolution of stars; he shared it with William A. Fowler. Chandrasekhar accepted this honour, but was upset that the citation mentioned only his earliest work, seeing this as a denigration of a lifetime's achievement. Chandrasekhar's most notable work concerns the astrophysical Chandrasekhar limit. The limit gives the maximum mass of a white dwarf star, ~1.44 solar masses, or equivalently, the minimum mass that must be exceeded for a star to collapse into a neutron star or black hole (following a supernova). The limit was first calculated by Chandrasekhar in 1930 during his maiden voyage from India to Cambridge, England, for his graduate studies. In 1998, NASA named the third of its four "Great Observatories" after Chandrasekhar, following a naming contest that attracted 6,000 entries from fifty states and sixty-one countries. The Chandra X-ray Observatory was launched and deployed by Space Shuttle Columbia on 23 July 1999.
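The limit just described has a standard closed form; as an illustrative sketch in conventional notation (with μe the mean molecular weight per electron and mH the mass of the hydrogen atom, names not spelled out in the text above):

$$
M_{\mathrm{Ch}} \;=\; \frac{\omega_3^0 \sqrt{3\pi}}{2}\left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_{\mathrm{H}})^2} \;\approx\; \frac{5.76}{\mu_e^2}\, M_\odot,
$$

where ω30 ≈ 2.018 is a constant arising from the Lane–Emden equation for a polytrope of index 3. Taking μe = 2, appropriate for helium, carbon, or oxygen white dwarfs, gives roughly the 1.44 solar masses quoted above.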
The Chandrasekhar number, an important dimensionless number of magnetohydrodynamics, is named after him, as are the asteroid 1958 Chandra and the Himalayan Chandra Telescope. In the Biographical Memoirs of Fellows of the Royal Society of London, R. J. Tayler wrote: "Chandrasekhar was a classical applied mathematician whose research was primarily applied in astronomy and whose like will probably never be seen again." Chandrasekhar supervised 45 PhD students. After his death, his wife Lalitha Chandrasekhar made a gift of his Nobel Prize money to the University of Chicago towards the establishment of the Subrahmanyan Chandrasekhar Memorial Fellowship. First awarded in the year 2000, this fellowship is given annually to an outstanding applicant to graduate school in the PhD programs of the department of physics or the department of astronomy and astrophysics. The S. Chandrasekhar Prize of Plasma Physics, established in 2014, is awarded by the Association of Asia Pacific Physical Societies (AAPS) to outstanding plasma physicists. The Chandra Astrophysics Institute (CAI) is a program for high school students interested in astrophysics, mentored by MIT scientists and sponsored by the Chandra X-ray Observatory. In 2010, to mark Chandra's 100th birthday, the University of Chicago held the Chandrasekhar Centennial Symposium, which was attended by leading astrophysicists such as Roger Penrose, Kip Thorne, Freeman Dyson, Jayant V. Narlikar, Rashid Sunyaev, G. Srinivasan, and Clifford Will. Its research talks were published in 2011 as a book titled Fluid Flows to Black Holes: A Tribute to S Chandrasekhar on His Birth Centenary. On 19 October 2017, Chandrasekhar was celebrated in a Google Doodle honouring his 107th birthday. Publications Chandrasekhar published around 380 papers in his lifetime. He wrote his first paper, on the Compton effect, in 1928, when he was still an undergraduate; his last paper, on the non-radial oscillation of stars, was accepted for publication in 1995, just two months before his death. The University of Chicago Press published selected papers of Chandrasekhar in seven volumes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Display_advertising] | [TOKENS: 2164] |
Digital display advertising Digital display advertising is online graphic advertising through banners, text, images, video, and audio. The main purpose of digital display advertising is to post company ads on third-party websites. A display ad is usually interactive (i.e. clickable), which allows brands and advertisers to engage more deeply with users. A display ad can also be a companion ad for a non-clickable video ad. According to eMarketer, Facebook and Twitter were set to take 33 percent of the display ad spending market share by 2017. Desktop display advertising eclipsed search ad buying in 2014, and mobile ad spending overtook display in 2015. Overview Digital display advertising is an online form of advertising in which a company's promotional messages appear on third-party sites, such as publishers or social networks, or on search engine results pages. There is evidence showing that this advertising can increase a company's website page views from most types of customers, except non-authenticated visitors who have visited the website before. The main purpose of display advertising is to support brand awareness (Robinson et al., 2007), and it also helps to increase consumers' purchase intention. Social media is used by many organizations. For example, in 2014 ASOS and Nike collaborated with Google Hangouts to create the first shoppable video web chat on Google+. The video, an example of display advertising, commemorated 27 years of Nike's Air Max shoes. The video advertising aimed at creating brand awareness among users and convincing them to watch the Hangout and purchase products from the display advertising itself; consumers were able to shop by clicking the display advertising. According to an ASOS statement, display advertising contributed to an increase of 28 percent in both the number of users visiting its website and downloads of the ASOS app, with users then visiting the website eight times a month on average. History Since its early days, the Internet has completely changed the way people relate to advertisements. As computer prices decreased, online content became accessible to a large portion of the world's population. This change has modified the way people are exposed to media and advertising and has led to the creation of online channels through which advertisements can reach users. The first type of relationship between a website and an advertiser was a straightforward, direct partnership. This partnership model implies that the advertiser promoting a product or service pays the website (also known as a publisher) directly for a certain number of ad impressions. The first digital ad, called a banner ad, was run in 1994 by AT&T on a site called HotWired. As time went on, publishers created thousands of websites, leading to millions of pages with unsold ad space. This gave rise to a new set of companies called ad networks. The ad network acted as a broker, buying unsold ad space from multiple publishers and packaging it into audiences to be sold to advertisers. This second wave of advertiser-publisher relationships rapidly gained popularity, as it was convenient and useful for buyers, who often found themselves paying a lower price yet receiving enhanced targeting capabilities through ad networks.
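As a simple worked illustration of the impression-based pricing in the direct-partnership model described above (the figures here are invented for the example): display inventory is typically quoted as CPM, the cost per thousand impressions, so that

$$
\text{cost} \;=\; \frac{\text{impressions}}{1000}\times \text{CPM} \;=\; \frac{2{,}000{,}000}{1000}\times \$5 \;=\; \$10{,}000.
$$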
The third and most recent major development that shaped the advertiser-publisher ecosystem started occurring in the late 2000s, when widespread adoption of real-time bidding (RTB) technology took place. Also referred to as programmatic bidding, RTB allows companies representing buyers and sellers to bid on the price of showing an ad to a user every time a banner ad is loading. When a page loads during a user visit, thousands of bids occur from advertisers to serve an ad to that user, based on each company's individual algorithms. With this most recent change in the industry, more and more ads are being sold on a single-impression basis, as opposed to in bulk purchases. Programmatic display advertising, or real-time bidding (RTB), has transformed the way digital display advertising is bought and managed in recent years. Rather than placing a booking for advertising directly with a website, advertisers manage their activity through a demand-side platform and bid to advertise to people in real time, across multiple websites, based on targeting criteria. This method of advertising quickly gained popularity, as it allows more control for the advertiser (or agency), including over the individual target audience rather than just the website. It has become a threat to website operators, as the cost paid for advertising in this way is generally less than under the old method, so their earning potential is reduced.[citation needed] Programmatic is not without its drawbacks: without appropriate management, adverts can appear against unsavoury content or inappropriate news topics. This issue became front-page news in February 2017, when ads from advertisers on YouTube were found displayed on terror group websites and fake news sites. As a result, a number of major advertisers paused all of their online advertising until they could put appropriate measures in place to prevent this occurring again. Other issues can arise from this method of buying display ads; for example, since DSPs mostly buy inventory on the public ad exchanges, the quality of the impressions bought can often be questionable and of low value. In response to this, the past few years have seen the proliferation of private deals through private marketplaces (PMPs). The first banner display on the World Wide Web appeared on 27 October 1994, on HotWired, the first commercial web magazine. The COCONET online service had graphical online banner ads starting in 1988 in San Diego, California.[citation needed] The Prodigy service, also launched in 1988, had banner ads as well.[citation needed] Operations The accounts department meets with the client to define campaign goals and translate those goals into a creative brief to be forwarded to the creative department. The role of the creative team is to conceptualise and create the advert. It has to develop a creative execution compelling enough to drive a customer to buy a product or a service. The team often consists of a mix of copywriters and graphic designers who use their respective skill sets to communicate via copy and visuals. They have to test the ways users experience all the information of a data visualization, studying users' responses to sound, image, and motion. They have to be aware of everything that is digitally consumed, know the newest technologies and media solutions, and help all the other departments find the best way to reach the campaign's objective.
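To make the real-time bidding flow described above concrete, here is a minimal, illustrative sketch of a single-impression auction. It assumes second-price rules, which historically dominated RTB exchanges (many exchanges have since moved to first-price auctions), and all names (Bid, run_auction, the sample DSP bids) are hypothetical rather than any real exchange's API.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str    # demand-side platform (DSP) submitting the bid
    cpm: float     # bid price, expressed per thousand impressions (CPM)

def run_auction(bids, floor_cpm=0.0):
    """Pick a winner for one ad impression under second-price rules:
    the highest bidder wins but pays the second-highest bid (or the floor)."""
    valid = [b for b in bids if b.cpm >= floor_cpm]
    if not valid:
        return None, 0.0                      # no bid clears the floor
    ranked = sorted(valid, key=lambda b: b.cpm, reverse=True)
    winner = ranked[0]
    price = ranked[1].cpm if len(ranked) > 1 else floor_cpm
    return winner, max(price, floor_cpm)

# One page load -> one impression -> one auction among competing DSPs.
bids = [Bid("dsp_a", 2.10), Bid("dsp_b", 3.40), Bid("dsp_c", 1.25)]
winner, price = run_auction(bids, floor_cpm=1.00)
print(winner.bidder, price)   # dsp_b wins and pays the runner-up price, 2.10
```

In a real exchange this auction completes while the page is loading, typically within a timeout on the order of 100 milliseconds, with each DSP's algorithm pricing the user and placement before bidding.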
Ad Operations, or 'ad ops', are the people who ensure that the ad is delivered to the correct website at the correct time. They do this by uploading the ad into the advertiser's ad server so that it can be delivered to the website and displayed to the end user who will see it. They are also responsible for delivering 100% of the advertiser's budget in an ad campaign by regularly tracking campaign performance and optimizing it towards the advertiser's KPIs. Ad servers help manage digital display advertisements. An ad server is an advertising technology (ad tech) tool that administers ads and their distribution across a platform; it is essentially a service or technology that takes care of a company's ad campaign programs and, on receiving the ad files, allocates them to different websites. The ad server is responsible for matters such as the dates within which a campaign has to run on a website; how quickly and where an ad is to be spread (geographic location targeting, language targeting, etc.); ensuring that an ad is not shown to a user too many times by limiting the number of views; and proposing ads based on past-behaviour targeting. There are different types of ad servers. An ad server for publishers helps them launch a new ad on a website by listing the highest ad prices on their site, and lets them follow the ad's growth by registering how many users it has reached. An ad server for advertisers helps them by sending the ads to each publisher in the form of HTML code, which makes it possible to open the ad at any moment and make changes, for example to frequency, at any time. Lastly, an ad server for ad networks provides information such as the network in which the publisher is registering income and the daily revenue. Two researchers at the Amsterdam School of Communication Research (ASCoR) have run studies on audience reactions to different display advertising formats. In particular, they took into consideration two different types of format (sponsored content and banner advertising) to demonstrate that people react to and perceive formats in different ways, positive and negative. For this reason, it is important to choose the right format, because it helps to make the most of the medium. To help select the right format for each type of ad, the Interactive Advertising Bureau (IAB) has produced a Display Standard Ad Unit Portfolio that works as a guideline to be followed by creatives. Of the IAB standard ad sizes published in 2007, several form the Universal Ad Package, and some entries were delisted after the update in 2011. Standard banner ad sizes are constantly evolving due to consumer creative fatigue and banner blindness, and ad companies consistently test the performance of ad units to ensure maximum performance for their clients. The IAB has updated its guidelines bi-annually. In 2015, the IAB announced advertising creative guidelines for display and mobile that take HTML5 into account. In 2017, the IAB also introduced new guidelines featuring adjustable ad formats, as well as guidelines for new digital content experiences such as augmented reality (AR), virtual reality (VR), social media, mobile video, emoji ad messaging, and 360-degree video ads. Display advertising fatigue is the term used to describe the state in which consumers lose their sensitivity or responsiveness to display advertising as a result of prolonged exposure to it.
This may lead to poorer click-through rates (CTR), lower engagement rates, and a general decline in the efficacy of ad campaigns. Digital advertising strategies that rely largely on re-targeting or repetitive exposure across websites and platforms are more likely to face display ad fatigue. The most common example of ad fatigue is YouTube ads, published through Google Ads by advertisers around the world. Its causes include overexposure (if users see the same advertisements too often, they may grow irritated or stop noticing them altogether, and the ads become less effective), lack of variety (users may become disinterested if they always encounter the same messaging, creatives and ad formats), and inadequate targeting (users may become disengaged or uninterested in ads that are not tailored or relevant to them). |
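The ad-server responsibilities described above (campaign flight dates, geographic targeting, and capping how often one user sees an ad) can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions, not a real ad server: the class and method names are invented, and a production system would use distributed counters and far richer targeting.

```python
from datetime import date
from collections import defaultdict

class AdServer:
    def __init__(self):
        self.campaigns = []                  # registered campaigns
        self.views = defaultdict(int)        # (user_id, campaign) -> view count

    def add_campaign(self, name, start, end, geos, freq_cap):
        self.campaigns.append(
            dict(name=name, start=start, end=end, geos=geos, freq_cap=freq_cap))

    def select_ad(self, user_id, user_geo, today):
        """Return the first campaign eligible for this request, or None."""
        for c in self.campaigns:
            in_flight = c["start"] <= today <= c["end"]        # campaign dates
            geo_ok = user_geo in c["geos"]                     # geo targeting
            under_cap = self.views[(user_id, c["name"])] < c["freq_cap"]
            if in_flight and geo_ok and under_cap:
                self.views[(user_id, c["name"])] += 1          # count the view
                return c["name"]
        return None

server = AdServer()
server.add_campaign("spring_sale", date(2024, 3, 1), date(2024, 3, 31),
                    geos={"US", "CA"}, freq_cap=3)
for _ in range(4):
    print(server.select_ad("user_1", "US", today=date(2024, 3, 10)))
# prints "spring_sale" three times, then None once the frequency cap is hit
```

The frequency cap is the piece most directly aimed at the ad fatigue discussed above: once a user has seen a campaign the configured number of times, the server simply stops selecting it for that user.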
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Diffusion_of_innovations] | [TOKENS: 5747] |
Diffusion of innovations Diffusion of innovations is a theory that seeks to explain how, why, and at what rate new ideas and technology spread. The theory was popularized by Everett Rogers in his book Diffusion of Innovations, first published in 1962. Rogers argues that diffusion is the process by which an innovation is communicated through certain channels over time among the participants in a social system. The origins of the diffusion of innovations theory are varied and span multiple disciplines. Rogers proposes that five main elements influence the spread of a new idea: the innovation itself, adopters, communication channels, time, and a social system. This process relies heavily on social capital. The innovation must be widely adopted in order to self-sustain. Within the rate of adoption, there is a point at which an innovation reaches critical mass. In 1989, management consultants working at the consulting firm Regis McKenna, Inc. theorized that this point lies at the boundary between the early adopters and the early majority. This gap between niche appeal and mass (self-sustained) adoption was originally labeled "the marketing chasm". The categories of adopters are innovators, early adopters, early majority, late majority, and laggards. Diffusion manifests itself in different ways and is highly subject to the type of adopters and the innovation-decision process. The criterion for the adopter categorization is innovativeness, defined as the degree to which an individual adopts a new idea earlier than the other members of a system. History The concept of diffusion was first studied by the French sociologist Gabriel Tarde in the late 19th century and by German and Austrian anthropologists and geographers such as Friedrich Ratzel and Leo Frobenius. The study of the diffusion of innovations took off in the subfield of rural sociology in the midwestern United States in the 1920s and 1930s. Agricultural technology was advancing rapidly, and researchers started to examine how independent farmers were adopting hybrid seeds, equipment, and techniques. A study of the adoption of hybrid corn seed in Iowa by Ryan and Gross (1943) solidified the prior work on diffusion into a distinct paradigm that would be cited consistently in the future. Since its start in rural sociology, Diffusion of Innovations has been applied to numerous contexts, including medical sociology, communications, marketing, development studies, health promotion, organizational studies, knowledge management, conservation biology and complexity studies, with a particularly large impact on the use of medicines, medical techniques, and health communications. In organizational studies, its basic epidemiological or internal-influence form was formulated by H. Earl Pemberton, using examples such as postage stamps and standardized school ethics codes. In 1962, Everett Rogers, a professor of rural sociology at Ohio State University, published his seminal work: Diffusion of Innovations. Rogers synthesized research from over 508 diffusion studies across the fields that initially influenced the theory: anthropology, early sociology, rural sociology, education, industrial sociology and medical sociology. Rogers applied it to the healthcare setting to address issues with hygiene, cancer prevention, family planning, and drunk driving. Using his synthesis, Rogers produced a theory of the adoption of innovations among individuals and organizations. Diffusion of Innovations and Rogers' later books are among the most often cited in diffusion research.
His methodologies are closely followed in recent diffusion research, even as the field has expanded into, and been influenced by, other methodological disciplines such as social network analysis and communication. Elements The key elements in diffusion research are the innovation, adopters, communication channels, time, and the social system. Studies have explored many characteristics of innovations. Meta-reviews have identified several characteristics that are common among most studies, in line with the characteristics that Rogers initially cited in his reviews. Rogers describes five characteristics that potential adopters evaluate when deciding whether to adopt an innovation (his 'ACCORD' model): relative advantage, compatibility, complexity, trialability, and observability. These qualities interact and are judged as a whole. For example, an innovation might be extremely complex, reducing its likelihood of being adopted and diffused, but at the same time be very compatible and carry a large relative advantage over current tools; even with the high learning curve, potential adopters might adopt the innovation anyway. Studies also identify other characteristics of innovations, but these are not as common as the ones that Rogers lists above. The fuzziness of the boundaries of the innovation can impact its adoption; specifically, innovations with a small core and a large periphery are easier to adopt. Innovations that are less risky are easier to adopt, as the potential loss from failed integration is lower. Innovations that are disruptive to routine tasks, even when they bring a large relative advantage, might not be adopted because of the added instability. Likewise, innovations that make tasks easier are likely to be adopted. Closely related to relative complexity, knowledge requirements are the barrier to use presented by the difficulty of using the innovation; even when knowledge requirements are high, support from prior adopters or other sources can increase the chances of adoption. Like innovations, adopters have been determined to have traits that affect their likelihood of adopting an innovation. A bevy of individual personality traits have been explored for their impact on adoption, but with little agreement. Ability and motivation, which, unlike personality traits, vary by situation, have a large impact on a potential adopter's likelihood of adopting an innovation. Unsurprisingly, potential adopters who are motivated to adopt an innovation are likely to make the adjustments needed to adopt it. Motivation can be affected by the meaning that an innovation holds; innovations can have symbolic value that encourages (or discourages) adoption. Another adopter trait, first proposed by Ryan and Gross (1943), is the overall connectedness of a potential adopter to the broad community represented by a city: potential adopters who frequent metropolitan areas are more likely to adopt an innovation. Finally, potential adopters who have the power or agency to create change, particularly in organizations, are more likely to adopt an innovation than someone with less power over his choices. Complementary to the diffusion framework, behavioral models such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) are frequently used to understand individual technology adoption decisions in greater detail. Organizations face more complex adoption possibilities because an organization is both the aggregate of its individuals and its own system with a set of procedures and norms.
Three organizational characteristics match well with the individual characteristics above: tension for change (motivation and ability), innovation-system fit (compatibility), and assessment of implications (observability). Organizations can feel pressured by a tension for change: if the organization's situation is untenable, it will be motivated to adopt an innovation to change its fortunes. This tension often plays out among its individual members. Innovations that match the organization's pre-existing system require fewer coincidental changes, are easier to assess, and are more likely to be adopted. The wider environment of the organization, often an industry, community, or economy, exerts pressures on the organization too: where an innovation is diffusing through the organization's environment for any reason, the organization is more likely to adopt it. Innovations that are intentionally spread, including by political mandate or directive, are also likely to diffuse quickly. Unlike individual decisions, where behavioral models (e.g. TAM and UTAUT) can be used to complement the diffusion framework and reveal further details, these models are not directly applicable to organizational decisions. However, research suggests that simple behavioral models can still be used as good predictors of organizational technology adoption when proper initial screening procedures are introduced. Process Diffusion occurs through a five-step decision-making process. It occurs through a series of communication channels over a period of time among the members of a similar social system. Ryan and Gross first identified adoption as a process in 1943. Rogers' five stages (awareness, interest, evaluation, trial, and adoption) are integral to this theory. An individual might reject an innovation at any time during or after the adoption process. Abrahamson examined this process critically by posing questions such as: How do technically inefficient innovations diffuse, and what impedes technically efficient innovations from catching on? Abrahamson makes suggestions for how organizational scientists can more comprehensively evaluate the spread of innovations. In later editions of Diffusion of Innovations, Rogers changed his terminology for the five stages to: knowledge, persuasion, decision, implementation, and confirmation; however, the descriptions of the categories have remained similar throughout the editions. Decisions Two factors determine what type a particular decision is: whether the decision is made freely and implemented voluntarily, and who makes the decision. Based on these considerations, three types of innovation-decisions have been identified: optional innovation-decisions, collective innovation-decisions, and authority innovation-decisions. Rate of adoption The rate of adoption is defined as the relative speed at which participants adopt an innovation. Rate is usually measured by the length of time required for a certain percentage of the members of a social system to adopt an innovation. The rates of adoption for innovations are determined by an individual's adopter category. In general, individuals who first adopt an innovation require a shorter adoption period (adoption process) than late adopters. Within the adoption curve, at some point the innovation reaches critical mass: the point at which the number of individual adopters ensures that the innovation is self-sustaining. Rogers outlines several strategies to help an innovation reach this stage, including having the innovation adopted by a highly respected individual within a social network and creating an instinctive desire for a specific innovation.
Another strategy includes injecting an innovation into a group of individuals who would readily use said technology, as well as providing positive reactions and benefits for early adopters. Adoption is an individual process detailing the series of stages one undergoes from first hearing about a product to finally adopting it; diffusion is a group phenomenon describing how an innovation spreads. Adopter categories Rogers defines an adopter category as a classification of individuals within a social system on the basis of innovativeness. In the book Diffusion of Innovations, Rogers suggests a total of five categories of adopters in order to standardize the usage of adopter categories in diffusion research. The adoption of an innovation follows an S-curve when plotted over a length of time. The categories of adopters are innovators, early adopters, early majority, late majority and laggards. In addition to the gatekeepers and opinion leaders who exist within a given community, change agents may come from outside the community. Change agents bring innovations to new communities – first through the gatekeepers, then through the opinion leaders, and so on through the community. Failed diffusion Failed diffusion does not mean that the technology was adopted by no one. Rather, failed diffusion often refers to diffusion that does not reach or approach 100% adoption due to the innovation's own weaknesses, competition from other innovations, or simply a lack of awareness. From a social networks perspective, a failed diffusion might be widely adopted within certain clusters but fail to make an impact on more distantly related people. Networks that are over-connected might suffer from a rigidity that prevents the changes an innovation might bring as well. Some innovations also fail as a result of a lack of local involvement and community participation. For example, Rogers discussed a situation in Peru involving the implementation of boiling drinking water to improve health and wellness levels in the village of Los Molinos. The residents had no knowledge of the link between sanitation and illness. The campaign worked with the villagers to try to teach them to boil water, burn their garbage, install latrines and report cases of illness to local health agencies. In Los Molinos, a stigma was attached to boiled water as something that only the "unwell" consumed, and thus the idea of healthy residents boiling water prior to consumption was frowned upon. The two-year educational campaign was considered to be largely unsuccessful. This failure exemplified the importance of the roles of the communication channels that are involved in such a campaign for social change. An examination of diffusion in El Salvador determined that there can be more than one social network at play as innovations are communicated: one network carries information and the other carries influence. While people might hear of an innovation's uses, in Rogers' Los Molinos sanitation case a network of influence and status prevented adoption. Heterophily and communication channels Lazarsfeld and Merton first called attention to the principle of homophily and its opposite, heterophily. Using their definition, Rogers defines homophily as "the degree to which pairs of individuals who interact are similar in certain attributes, such as beliefs, education, social status, and the like". When given the choice, individuals usually choose to interact with someone similar to themselves.
Homophilous individuals engage in more effective communication because their similarities lead to greater knowledge gain as well as attitude or behavior change. As a result, homophilous people tend to promote diffusion among each other. However, diffusion requires a certain degree of heterophily to introduce new ideas into a relationship; if two individuals are identical, no diffusion occurs because there is no new information to exchange. Therefore, an ideal situation would involve potential adopters who are homophilous in every way, except in knowledge of the innovation. In the field of innovation studies, network analysis has been increasingly applied to investigate how patterns of collaboration and connectivity influence the diffusion of new ideas and technologies. Recent contributions include Elements of Network Science: Theory, Methods and Applications in Stata, R and Python by Antonio Zinilli (2025), which integrates theoretical and applied perspectives on network science with a focus on socio-economic systems and innovation processes. Empirical studies by Zinilli and co-authors have examined the structural dynamics of inter-city innovation networks in China using temporal exponential random graph models (TERGM), highlighting the role of network evolution in shaping innovation flows, as well as the configuration of innovation networks among European city-regions, investigating whether they function as exclusive clubs or inclusive hubs. Promotion of healthy behavior provides an example of the balance required between homophily and heterophily. People tend to be close to others of similar health status. As a result, people with unhealthy behaviors like smoking and obesity are less likely to encounter information and behaviors that encourage good health. This presents a critical challenge for health communications, as ties between heterophilous people are relatively weaker, harder to create, and harder to maintain. Developing heterophilous ties to unhealthy communities can increase the effectiveness of the diffusion of good health behaviors: once one previously homophilous tie adopts the behavior or innovation, the other members of that group are more likely to adopt it too. Role of social systems Not all individuals exert an equal amount of influence over others. In this sense, opinion leaders are influential in spreading either positive or negative information about an innovation. Rogers relies on the ideas of Katz and Lazarsfeld and the two-step flow theory in developing his ideas on the influence of opinion leaders. Opinion leaders have the most influence during the evaluation stage of the innovation-decision process and on late adopters. In addition, opinion leaders typically have greater exposure to the mass media, are more cosmopolitan, have greater contact with change agents and more social experience and exposure, enjoy higher socioeconomic status, and are more innovative than others. Research was done in the early 1950s at the University of Chicago attempting to assess the cost-effectiveness of broadcast advertising on the diffusion of new products and services. The findings were that opinion leadership tended to be organized into a hierarchy within a society, with each level in the hierarchy having most influence over other members in the same level and on those in the next level below it. The lowest levels were generally larger in numbers and tended to coincide with various demographic attributes that might be targeted by mass advertising.
However, it found that direct word of mouth and example were far more influential than broadcast messages, which were only effective if they reinforced the direct influences. This led to the conclusion that advertising was best targeted, if possible, at those next in line to adopt, and not at those not yet reached by the chain of influence. Research on actor-network theory (ANT) also identifies a significant overlap between the ANT concepts and the diffusion of innovations: both examine the characteristics of an innovation and its context among various interested parties within a social system in assembling a network or system that implements innovation. Other research relating the concept to public choice theory finds that the hierarchy of influence for innovations need not, and likely does not, coincide with hierarchies of official, political, or economic status. Elites are often not innovators, and innovations may have to be introduced by outsiders and propagated up a hierarchy to the top decision makers. Prior to the introduction of the Internet, the book The IRG Solution – Hierarchical Incompetence and How to Overcome It argued that social networks had a crucial role in the diffusion of innovation, particularly of tacit knowledge. The book argued that the widespread adoption of computer networks of individuals would lead to much better diffusion of innovations, with greater understanding of their possible shortcomings and the identification of needed innovations that would not otherwise have occurred. The social model proposed by Ryan and Gross is expanded by Valente, who uses social networks as a basis for adopter categorization instead of relying solely on the system-level analysis used by Ryan and Gross. Valente also looks at an individual's personal network, which is a different application from the organizational perspective espoused by many other scholars. Recent research by Wear shows that, particularly in regional and rural areas, significantly more innovation takes place in communities which have stronger inter-personal networks. Innovations are often adopted by organizations through two types of innovation-decisions: collective innovation decisions and authority innovation decisions. The collective decision occurs when adoption is by consensus; the authority decision occurs by adoption among very few individuals with high positions of power within an organization. Unlike the optional innovation-decision process, these decision processes occur only within an organization or hierarchical group. Research indicates that, with proper initial screening procedures, even a simple behavioral model can serve as a good predictor of technology adoption in many commercial organizations. Within an organization, certain individuals termed "champions" stand behind an innovation and break through opposition; the champion plays a role very similar to that of the champion in the Six Sigma efficiency methodology. The process contains five stages that are slightly similar to the innovation-decision process that individuals undertake: agenda-setting, matching, redefining/restructuring, clarifying and routinizing. Extensions of the theory Diffusion of Innovations has been applied beyond its original domains. In the case of political science and administration, policy diffusion focuses on how institutional innovations are adopted by other institutions, at the local, state, or country level.
An alternative term is 'policy transfer', where the focus is more on the agents of diffusion and on the diffusion of policy knowledge, as in the work of Diane Stone. Specifically, policy transfer can be defined as "knowledge about how policies, administrative arrangements, institutions, and ideas in one political setting (past or present) is used in the development of policies, administrative arrangements, institutions, and ideas in another political setting". Early interest in policy diffusion focused on time variation in state lottery adoption, but more recently interest has shifted toward mechanisms (emulation, learning, and coercion) and channels of diffusion; researchers find, for example, that the creation of regulatory agencies is transmitted through country and sector channels. At the local level, examining popular city-level policies makes it easy to find patterns in diffusion through measuring public awareness. At the international level, economic policies have been thought to transfer among countries according to local politicians' learning of successes and failures elsewhere and to mandates made by global financial organizations. As a group of countries succeeds with a set of policies, others follow, as exemplified by the deregulation and liberalization across the developing world after the successes of the Asian Tigers. The reintroduction of regulations in the early 2000s, which can be seen as a lesson drawn from China's successful growth, also shows this learning process, fitting under the stages of knowledge and decision. Peres, Muller and Mahajan suggested that diffusion is "the process of the market penetration of new products and services that is driven by social influences, which include all interdependencies among consumers that affect various market players with or without their explicit knowledge". Eveland evaluated diffusion from a phenomenological view, stating, "Technology is information, and exists only to the degree that people can put it into practice and use it to achieve values". Diffusion of existing technologies has been measured using "S curves". These technologies include radio, television, VCR, cable, flush toilet, clothes washer, refrigerator, home ownership, air conditioning, dishwasher, electrified households, telephone, cordless phone, cellular phone, per capita airline miles, personal computer and the Internet. These data can act as a predictor for future innovations. Diffusion curves for infrastructure reveal contrasts in the diffusion process of personal technologies versus infrastructure. The scientific literature identifies, in the works of Rogers (1976) and Mahajan & Muller (1994), an intrinsic aspect of the theory of diffusion of innovations: the focus on data about products, ideas/patents, and technologies. As the development of this research paradigm was paved mainly by studies on the diffusion of products and/or services (Im, Mason & Houston, 2007; Lassar, Manolis & Lassar, 2005), little was structured to consider a theme as the basis for studies on diffusion, as in the work of Takahashi, Figueiredo & Favaretto (2023), who analyzed the diffusion of Deep Learning in BRICS and OECD countries using data from Google Trends. Consequences of adoption Both positive and negative outcomes are possible when an individual or organization chooses to adopt a particular innovation. Rogers states that this area needs further research because of the biased positive attitude that is associated with innovation.
Rogers lists three categories for consequences: desirable vs. undesirable, direct vs. indirect, and anticipated vs. unanticipated. In contrast, Wejnert details two categories: public vs. private and benefits vs. costs. Public consequences comprise the impact of an innovation on those other than the actor, while private consequences refer to the impact on the actor. Public consequences usually involve collective actors, such as countries, states, organizations, or social movements. The results are usually concerned with issues of societal well-being. Private consequences usually involve individuals or small collective entities, such as a community. The innovations are usually concerned with the improvement of quality of life or the reform of organizational or social structures. The benefits of an innovation are its positive consequences, while the costs are its negative ones. Costs may be monetary or nonmonetary, direct or indirect. Direct costs are usually related to financial uncertainty and the economic state of the actor. Indirect costs are more difficult to identify. An example would be the need to buy a new kind of pesticide to use innovative seeds. Indirect costs may also be social, such as social conflict caused by innovation. Marketers are particularly interested in the diffusion process as it determines the success or failure of a new product. It is quite important for a marketer to understand the diffusion process so as to ensure proper management of the spread of a new product or service. The diffusion of innovations theory has been used to conduct research on the unintended consequences of new interventions in public health. Rogers's book gives multiple examples of the unintended negative consequences of technological diffusion. The adoption of automatic tomato pickers developed by Midwest agricultural colleges led to the adoption of harder tomatoes (disliked by consumers) and the loss of thousands of jobs, leading to the collapse of thousands of small farms. In another example, the adoption of snowmobiles in Saami reindeer-herding culture was found to lead to the collapse of their society, with widespread alcoholism and unemployment among the herders, ill health for the reindeer (such as stress ulcers and miscarriages), and a huge increase in inequality. Mathematical treatment The diffusion of an innovation typically follows an S-shaped curve which often resembles a logistic function. Rogers's diffusion model concludes that the popularity of a new product will grow with time to a saturation level and then decline, but it cannot predict how much time it will take or what the saturation level will be. Bass (1969) and many other researchers proposed modeling diffusion with parametric formulas to fill this gap and to provide a means for a quantitative forecast of adoption timing and levels. The Bass model focuses on the first two stages of the product life cycle (introduction and growth); some of the Bass-model extensions present mathematical models for the last two (maturity and decline). MS Excel or other tools can be used to solve the Bass model equations, and those of other diffusion models, numerically; a minimal numerical sketch follows this paragraph. Mathematical programming models such as the S-D model apply the diffusion of innovations theory to real data problems. In addition, agent-based models follow a more intuitive process by designing individual-level rules to model the diffusion of ideas and innovations.
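To make the Bass model concrete, the following Python sketch integrates its hazard equation dF/dt = (p + qF)(1 − F) with a simple Euler step, in the spirit of the spreadsheet approach mentioned above. The coefficients p (innovation), q (imitation), and the market potential m are illustrative assumptions, not estimates from real adoption data; with p small the resulting curve approaches the logistic S-curve described earlier.

    # Minimal numerical solution of the Bass (1969) diffusion model.
    # p = coefficient of innovation, q = coefficient of imitation,
    # m = market potential; the values below are illustrative assumptions.
    def bass_adoption(p=0.03, q=0.38, m=1000.0, t_max=20.0, dt=0.1):
        """Return (times, cumulative adopters) for dF/dt = (p + q*F) * (1 - F)."""
        times, adopters = [0.0], [0.0]
        F = 0.0  # fraction of the market that has adopted so far
        steps = int(t_max / dt)
        for i in range(1, steps + 1):
            F += (p + q * F) * (1.0 - F) * dt  # Euler step on the Bass equation
            times.append(i * dt)
            adopters.append(m * F)
        return times, adopters

    t, n = bass_adoption()
    # For the Bass model the adoption-rate peak falls near t* = ln(q/p)/(p+q),
    # about 6.2 time units for the illustrative p and q above.
    print(n[62], n[-1])  # adopters near the peak time and at t_max

Fitting p, q, and m to observed sales or adoption series is what turns this curve into the quantitative forecasting tool the paragraph above describes.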
Complex network models can also be used to investigate the spread of innovations among individuals connected to each other by a network of peer-to-peer influences, such as in a physical community or neighborhood. Such models represent a system of individuals as nodes in a network (or graph). The interactions that link these individuals are represented by the edges of the network and can be based on the probability or strength of social connections. In the dynamics of such models, each node is assigned a current state, indicating whether or not the individual has adopted the innovation, and model equations describe the evolution of these states over time. In threshold models, the uptake of technologies is determined by the balance of two factors: the (perceived) usefulness (sometimes called utility) of the innovation to the individual, and barriers to adoption, such as cost. The multiple parameters that influence decisions to adopt, both individual and socially motivated, can be represented by such models as a series of nodes and connections that represent real relationships. Borrowing from social network analysis, each node is an innovator, an adopter, or a potential adopter. Potential adopters have a threshold: the fraction of their neighbors who must adopt the innovation before they will adopt it themselves. Over time, each potential adopter observes its neighbors and decides whether to adopt based on the technologies they are using. When the effect of each individual node is analyzed along with its influence over the entire network, the expected level of adoption is seen to depend on the number of initial adopters and on the network's structure and properties. Two factors emerge as important to the successful spread of an innovation: the number of connections of nodes with their neighbors, and the presence of a high degree of common connections in the network (quantified by the clustering coefficient). These models are particularly good at showing the impact of opinion leaders relative to others. Computer models are often used to investigate this balance between the social aspects of diffusion and the perceived intrinsic benefit to individuals; a small simulation of the threshold dynamics just described follows this paragraph.
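The following Python sketch implements the simple threshold dynamics described above: a potential adopter adopts once the adopting fraction of its neighbors reaches its threshold. The toy graph, the seed set, and the uniform threshold of 0.4 are all illustrative assumptions; actual studies use measured networks and heterogeneous thresholds.

    # Hedged sketch of a linear-threshold diffusion on a small undirected graph.
    # A potential adopter adopts once at least `threshold` of its neighbors have.
    def threshold_diffusion(adjacency, seeds, threshold=0.4, max_rounds=100):
        adopted = set(seeds)
        for _ in range(max_rounds):
            newly = {
                node for node, nbrs in adjacency.items()
                if node not in adopted and nbrs
                and sum(n in adopted for n in nbrs) / len(nbrs) >= threshold
            }
            if not newly:  # steady state: no remaining node crosses its threshold
                break
            adopted |= newly
        return adopted

    # Toy neighborhood: nodes 1 and 2 seed a cascade that reaches 3, then 4, then 5.
    graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3, 5], 5: [4]}
    print(sorted(threshold_diffusion(graph, seeds={1, 2})))  # -> [1, 2, 3, 4, 5]

Varying the seed set, the threshold, or the graph's clustering in such a simulation is one way to probe how the number of initial adopters and the network's structure shape the expected level of adoption.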
Criticism Even though more than four thousand articles on Diffusion of Innovations have been published across many disciplines, the vast majority written after Rogers created a systematic theory, there have been few widely adopted changes to the theory. Although each study applies the theory in slightly different ways, critics say this lack of cohesion has left the theory stagnant and difficult to apply with consistency to new problems. Diffusion is difficult to quantify because humans and human networks are complex. It is extremely difficult, if not impossible, to measure what exactly causes adoption of an innovation. This variety of variables has also led to inconsistent results in research, reducing the theory's heuristic value. Compared with diffusion models in the natural sciences, innovation-diffusion models also lack a clear understanding of the spatial structure on which innovation propagates. Product management can shape the topology of the diffusion space in numerous ways by means of segmentation, product portfolios, and lifecycle management. Rogers placed the contributions and criticisms of diffusion research into four categories: pro-innovation bias, individual-blame bias, the recall problem, and issues of equality. The pro-innovation bias, in particular, implies that all innovation is positive and that all innovations should be adopted. Cultural traditions and beliefs can be displaced by those of another culture through diffusion, which can impose significant costs on a group of people. The one-way information flow, from sender to receiver, is another weakness of this theory. The message sender has a goal of persuading the receiver, and there is little to no reverse flow. The person implementing the change controls the direction and outcome of the campaign. In some cases, this is the best approach, but other cases require a more participatory approach. In complex environments, where the adopter is receiving information from many sources and is returning feedback to the sender, a one-way model is insufficient and multiple communication flows need to be examined.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Morvarc%27h] | [TOKENS: 3049] |
Contents Morvarc'h Morvarc'h (Breton for "sea horse") is the name of a fabulous horse of Breton legend found in two folktales reworked in the 19th and 20th centuries. Though its name appears in older sources, it was invented or reinterpreted by Charles Guyot, who named it Morvark in his version of the legend of the city of Ys in 1926. It belongs to the "Queen of the North" Malgven, who gives it to her husband King Gradlon. Endowed with the ability to gallop on the waves, Morvarc'h is described as having a black coat and as breathing flames through its nostrils. It also appears in a Breton folktale about King Marc'h of Cornouaille. In the course of a deer hunt it is killed by its own rider's arrow, which has been turned around by the spell of Dahud, the daughter of Malgven. She then puts the ears of the horse Morvarc'h on the head of King Marc'h, who seeks in vain to hide them. As the legend of Morvarc'h comes from Cornouaille in Brittany, the horse is the subject of equestrian statues in the town of Argol and in Saint Corentin's Cathedral in Quimper. Folklore connects it with the village of Pouldreuzic. Linked to the water like many Celtic horses, Morvarc'h reappears in more recent works composed around the legend of the drowned city of Ys, among which are novels by Gordon Zola, André Le Ruyet and Suzanne Salmon, and a song by Dan Ar Braz. Etymology The name Morvarc'h means "sea horse" or "marine horse" in Breton. It appears in Grégoire de Rostrenen's dictionary, published in 1732. The name causes confusion in the Breton language because, depending on the case, it can also mean "walrus" or "whale": Françoise Le Roux and Christian Guyonvarc'h translate the Morvarc'h of Charles Guyot as "morse" ("walrus"), a name they find "incongruous to designate a fiery stallion". In Breton legends The horse Morvarc'h appears in two Breton legends reworked in the 19th and 20th centuries: that of the city of Ys, with Malgven and Gradlon, and that of Marc'h, King of Cornouaille. The name is also mentioned in the Barzaz Breiz, without any apparent link to the other two stories. Théodore Hersart de La Villemarqué mentions a "sea horse" (morvarc'h) in the Barzaz Breiz (1840). This horse is a warrior symbol, as evidenced by the bard Gwenc'hlan in his prophecy likening it to the king: "Morvark, this is the master I have chosen. Take us over the sea to his ships. You are more rapid than the wind, you laugh at the waves, you outstrip the storms, the sea-eagle wears out his tireless wings pursuing you." Morvarc'h also figures in recent versions of the legend of the city of Ys, featuring King Gradlon, his wife Queen Malgven, their daughter Dahut and the evangelist Saint Guénolé, who is trying to persuade Gradlon to put an end to the pagan machinations of his daughter. There is a possible mention of Morvarc'h, not named, in a poem which Théodore Hersart de La Villemarqué presents as dating from the 13th century. Without having a specific name, the mount of King Grallon loses its master during an attempt to escape by swimming; the master drowns, and the horse runs wild: His destrier, when he escaped him from the perilous river, grieved greatly for his master's loss. He sought again the mighty forest, yet never was at rest by night or day.
No peace might he find, but ever pawed he with his hoofs upon the ground, and neighed so loudly that the noise went through all the country round about. (The Lay of Graelent, translated by Eugene Mason.) This poem, presented by La Villemarqué as a medieval lai by Marie de France, is better categorized as an Arthurian tale of courtly love. Contrary to popular belief, the detailed picture of Morvarc'h (here named "Morvark") comes mostly from a modern reworking of the legend, written by Charles Guyot in 1926, which is clearly influenced by the Romantics of the 19th century. The legend is not fixed, and many hagiographized and folklorized versions circulate. According to the Celticists Françoise Le Roux and Christian-Joseph Guyonvarc'h, this constitutes a "catastrophe for legend" and seriously complicates the search for elements deriving from Celtic sources. Indeed, according to Thierry Jigourel, this horse is an invention of Charles Guyot. In the version of Guyot's The Legend of the City of Ys published in 1926 by H. Piazza, this horse is "a supernatural mount worthy of a god, born of a siren and an undine, offered by the genii of the sea to King Harold, ageing husband of Malgven". However, only Queen Malgven can tame it. It is described as black, crosses the ramparts without reins or bridle "as easily as the hedge of an orchard", and "flies over the frothy sea". Malgven calls him the "horse of the night". During a war expedition, King Gradlon of Cornouaille is abandoned by his army while he besieges a fortress built by the side of a fjord. Left alone and pacing the foot of the ramparts to find a way in, one evening he meets a woman who seems to be waiting for him. This is Malgven, the queen of the "North", who tells him that she has observed him since the beginning of the siege and that she loves him. She enables him to enter the citadel and leads him to the royal chamber where her husband sleeps. Gradlon kills him and seizes the treasure. To return to Cornouaille he mounts an enchanted horse, Morvark, who can run on the ocean, and rides him with Malgven. At the end of a day's ride, the lovers join the Breton fleet. A year elapses before they return to Brittany; then Malgven dies giving birth to a girl, Dahud. On the death of his mistress, Morvark emits a whinny "as mournful as a human sob" and begins to weep. After Malgven's death it is King Gradlon who rides him, while his daughter Dahut has a "flame-coloured hackney". The horse reappears during the flooding of the city: "Morvark, the gallant steed, swam tirelessly shoreward; through flooded crossroads, through streets in torrents, he galloped, lighter than air". Gradlon carries his daughter on Morvarc'h, but: Barely does he stay on the horse; the latter bends as if three heavily armed men were riding it; then the ocean reaches it, embraces it, suddenly reaches as far as its hocks; and Gradlon feels his knees cold, his fingers grasping Morvark's mane. The noble animal strikes the sea with its powerful hooves; his chest boldly divides the swell, like the bow of a ship under the steady pull of the oars; it neighs with pride and rage, and raising its double burden, shakes its wet mane. Meanwhile the water licks its sweating flanks, penetrates into its smoking nostrils; it engulfs the riders to the waist. When Morvark is about to sink into the waves, Guénolé touches Dahut's shoulder with the tip of his staff and drops her into the water, allowing Morvark to rise to the surface.
In a tale collected in the valley of the Aulne by Yann ar Floc'h in 1905, Morvarc'h is also the name of the fabulous horse belonging to another king, Marc'h of Poulmarc'h (or Portzmarc'h, Plomarc'h), near Douarnenez. Passionate about hunting, the king fails to catch a doe on his fabulous horse. Only at the edge of the cliff, near where Ys was sunk, does he come face to face with her. He aims his bow and shoots an arrow that magically turns around and kills his horse. He rushes at the doe to finish her off with his dagger, but she has disappeared and in her place is a beautiful girl. It is Dahud (Ahès), the daughter of Gradlon and Malgven. Before returning to the sea, she affixes the ears of the horse Morvarc'h to Marc'h's head. He tries to hide this, and in the process kills all the barbers of the kingdom who discover his secret, until only one remains, whom he tells to say nothing, on pain of death. The barber cannot hold his peace any longer, and divulges to a handful of reeds that "King Marc'h has the ears of the horse Morvarc'h". The reeds are harvested and bagpipes are made from them, but in a burst of music the bagpipes reveal the king's secret, making the whole kingdom of Brittany aware of it. Yann Brekilien adds that this horse is "silver shoed" and runs so lightly "that his feet do not leave marks on the moor". Analysis The horse Morvarc'h is an indispensable adjunct to the legend of the city of Ys. Like many other horses of Breton legend, it is linked, etymologically and symbolically, to water and the sea. Stories of horses crossing the sea (often having some of the characteristics of psychopomps) exist in Celtic mythology, and there are many instances in popular Celtic traditions of maleficent horses who come from the water. In this role of psychopomp crossing the water, the horse competes with the boat of Charon, the ferryman of the dead. For the esoteric author Robert-Jacques Thibaud, who cites Morvarc'h as his first example, "the horse represents the primordial ocean". The storyteller Yann Brekilien identifies the horse of Gradlon with that of King Marc'h, and describes it as having a black mane and as "galloping as well on water as on land". For Gaël Milin, although the tale of King Marc'h is often close to that of King Midas with his donkey ears, the analogy stops there, since the equine ears of Marc'h are probably a mark of the legitimacy of his sovereignty. The trope of the horse's ears appears as early as the 12th century in a work of Béroul, The Romance of Tristan. According to Pierre-Jakez Hélias, Morvarc'h supposedly left a hoofprint in the municipality of Pouldreuzic: the horse is said to have stepped ashore out of the water with Gradlon on his back, after the drowning of the city of Ys. In the visual arts The earliest possible representations of Morvarc'h are old: a 15th-century lead statue of Gradlon on his horse stood between the two spires of Saint Corentin Cathedral in Quimper. It was destroyed by the sans-culottes on 12 December 1793, during the French Revolution, along with other art objects considered royalist. A new statue, this time in granite, designed by the architect Joseph Bigot, was based on a fragment of the old one. It was created by the sculptors Amédée Ménard and Le Brun, and was placed on the same spot as the old one in 1858. Morvarc'h also appears in a painting by Évariste-Vital Luminais, The Flight of King Gradlon, painted around 1884 and kept at the Musée des Beaux-Arts de Quimper.
This painting itself inspired another equestrian sculpture in granite, made by Patrig Ar Goarnig in the municipality of Argol. It represents the horse Morvarc'h ridden by Gradlon. On each side of the statue is a version of the legend of the city of Ys, one pagan and one Christian. The Christian version is the one most commonly told; the pagan version has Dahut managing to flee with her son on the back of Morvarc'h, while Gradlon is in the water and shouts to his daughter to stay with him. Another equestrian statue stands on the pediment of the triumphal arch of the Church of St. Peter and St. Paul in Argol. The poet Arthur Rimbaud parodied the legend in one of his letters, with a drawing titled "The Sledge", in which Malgven is riding a sledge pulled by a schoolboy who fears he might see it overturn. In literature and music The horse Morvarc'h has made appearances in various novels, and also in music. Dan Ar Braz gave the title Morvac'h (cheval de la mer, "horse of the sea") to the sixth track of his 1977 album Douar Nevez. Novels which include the legend of the city of Ys are mostly pseudo-historical tributes to the legends of Brittany. Morvarc'h gives its name to André Le Ruyet's book, Morvarc'h cheval de mer (1999, reissued 2011), which tells of the travels of Philippe, a Parisian who discovers the wonders of Celtic legend. It is mentioned in Gordon Zola's parody, La Dérive des incontinents: "Having no boat at their disposal, Grallon the Breton and Malgven stole and rode off on Morvarc'h, the magic horse of the queen – Morvarc'h, which is to the sea what the morbac'h is to fleece, means 'sea horse' – It was a beautiful marine steed – as black as the bottom of a moonless night and endowed with nostrils that spit fire." It is also found in Ce soir à Cornebise, a novel by Suzanne Salmon in which six holidaymakers practise spiritualism and contact the spirit of Dahut, one of them being a reincarnation of King Gradlon.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Culture] | [TOKENS: 6022] |
Contents Culture Culture (/ˈkʌltʃər/ KUL-chər) is a concept that encompasses the social behavior, institutions, and norms found in human societies, as well as the knowledge, beliefs, arts, laws, customs, capabilities, attitudes, and habits of the individuals in these groups. Culture often originates from or is attributed to a specific region or location. Humans acquire culture through the learning processes of enculturation and socialization, which is shown by the diversity of cultures across societies. A cultural norm codifies acceptable conduct in society; it serves as a guideline for behavior, dress, language, and demeanor in a situation, and as a template for expectations in a social group. Accepting only a monoculture in a social group can carry risks, just as a single species can wither in the face of environmental change, for lack of functional responses to the change. Thus, in military culture, valor is counted as a typical behavior for an individual, and duty, honor, and loyalty to the social group are counted as virtues or functional responses in the continuum of conflict. In religion, analogous attributes can be identified in a social group. Cultural change, or repositioning, is the reconstruction of a cultural concept of a society. Cultures are internally affected by both forces encouraging change and forces resisting change. Cultures are externally affected via contact between societies. Organizations like UNESCO attempt to preserve culture and cultural heritage. Description Culture is considered a central concept in anthropology, encompassing the range of phenomena that are transmitted through social learning in human societies. Cultural universals are found in all human societies. These include expressive forms like art, music, dance, ritual, religion, and technologies like tool usage, cooking, shelter, and clothing. The concept of material culture covers the physical expressions of culture, such as technology, architecture and art, whereas the immaterial aspects of culture such as principles of social organization (including practices of political organization and social institutions), mythology, philosophy, literature (both written and oral), and science comprise the intangible cultural heritage of a society. In the humanities, one sense of culture as an attribute of the individual has been the degree to which they have cultivated a particular level of sophistication in the arts, sciences, education, or manners. The level of cultural sophistication has also sometimes been used to distinguish civilizations from less complex societies. Such hierarchical perspectives on culture are also found in class-based distinctions between a high culture of the social elite and a low culture, popular culture, or folk culture of the lower classes, distinguished by stratified access to cultural capital. In common parlance, culture is often used to refer specifically to the symbolic markers used by ethnic groups to distinguish themselves visibly from each other, such as body modification, clothing or jewelry. Mass culture refers to the mass-produced and mass-mediated forms of consumer culture that emerged in the 20th century. Some schools of philosophy, such as Marxism and critical theory, have argued that culture is often used politically as a tool of the elites to manipulate the proletariat and create a false consciousness. Such perspectives are common in the discipline of cultural studies.
In the wider social sciences, the theoretical perspective of cultural materialism holds that human symbolic culture arises from the material conditions of human life, and that the basis of culture is found in evolved biological dispositions. When used as a count noun, a "culture" is the set of customs, traditions, and values of a society or community, such as an ethnic group or nation, and the knowledge acquired over time. In this sense, multiculturalism values the peaceful coexistence and mutual respect between different cultures inhabiting the same planet. Sometimes "culture" is also used to describe specific practices within a subgroup of a society, a subculture (e.g., "bro culture"), or a counterculture. Within cultural anthropology, the ideology and analytical stance of cultural relativism hold that cultures cannot easily be objectively ranked or evaluated because any evaluation is necessarily situated within the value system of a given culture. Etymology The modern term culture is based on a term used by the ancient Roman orator Cicero in his Tusculanae Disputationes, where he wrote of a cultivation of the soul or cultura animi, using an agricultural metaphor for the development of a philosophical soul, understood teleologically as the highest possible ideal for human development. Samuel von Pufendorf took over this metaphor in a modern context, meaning something similar, but no longer assuming philosophy was humanity's natural perfection. This use, and that of many writers, "refers to all the ways in which human beings overcome their original barbarism, and through artifice, become fully human". Edward S. Casey wrote, "The very word culture meant 'place tilled' in Middle English, and the same word goes back to Latin colere, 'to inhabit, care for, till, worship' and cultus, 'A cult, especially a religious one.' To be cultural, to have a culture, is to inhabit a place sufficiently intensely to cultivate it—to be responsible for it, to respond to it, to attend to it caringly." As described by Richard Velkley, culture "...originally meant the cultivation of the soul or mind, [and] acquires most of its later modern meaning in the writings of the 18th-century German thinkers, who were on various levels developing Rousseau's criticism of 'modern liberalism and Enlightenment'. Thus a contrast between 'culture' and 'civilization' is usually implied in these authors, even when not expressed as such." In the words of anthropologist E. B. Tylor, it is "that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society". Alternatively, in a contemporary variant, "Culture is defined as a social domain that emphasizes the practices, discourses and material expressions, which, over time, express the continuities and discontinuities of social meaning of a life held in common." The Cambridge English Dictionary states that culture is "the way of life, especially the general customs and beliefs, of a particular group of people at a particular time." Terror management theory posits that culture is a series of activities and worldviews that provide humans with the basis for perceiving themselves as "person[s] of worth within the world of meaning"—raising themselves above the merely physical aspects of existence, in order to deny the animal insignificance and death that Homo sapiens became aware of when they acquired a larger brain.
The word is used in a general sense as the evolved ability to categorize and represent experiences with symbols and to act imaginatively and creatively. This ability arose with the evolution of behavioral modernity in humans around 50,000 years ago and is often thought to be unique to humans. However, some other species have demonstrated similar, though less complicated, abilities for social learning. It is also used to denote the complex networks of practices and accumulated knowledge and ideas that are transmitted through social interaction and exist in specific human groups, or cultures, using the plural form. Change Raimon Panikkar identified 29 ways in which cultural change can be brought about, including growth, development, evolution, involution, renovation, reconception, reform, innovation, revivalism, revolution, mutation, progress, diffusion, osmosis, borrowing, eclecticism, syncretism, modernization, indigenization, and transformation. In this context, modernization could be viewed as adopting Enlightenment-era beliefs and practices, such as science, rationalism, industry, commerce, democracy, and the notion of progress. Rein Raud, building on the work of Umberto Eco, Pierre Bourdieu and Jeffrey C. Alexander, has proposed a model of cultural change based on claims and bids, which are judged by their cognitive adequacy and endorsed or not endorsed by the symbolic authority of the cultural community in question. Cultural invention has come to mean any innovation that is new and found to be useful to a group of people and expressed in their behavior, but which does not exist as a physical object. Humanity is in a global "accelerating culture change period," driven by the expansion of international commerce, the mass media, and above all, the human population explosion, among other factors. Culture repositioning means the reconstruction of the cultural concept of a society. Cultures are internally affected by both forces encouraging change and forces resisting change. These forces are related to both social structures and natural events and are involved in perpetuating cultural ideas and practices within current structures, which themselves are subject to change. Social conflict and the development of technologies can produce changes within a society by altering social dynamics and promoting new cultural models and spurring or enabling generative action. These social shifts may accompany ideological shifts and other types of cultural change. For example, the feminist movement involved new practices that produced a shift in gender relations, altering both gender and economic structures. Environmental conditions may also enter as factors. For example, after tropical forests returned at the end of the last ice age, plants suitable for domestication were available, leading to the invention of agriculture, which in turn brought about many cultural innovations and shifts in social dynamics. Cultures are externally affected via contact between societies, which may also produce—or inhibit—social shifts and changes in cultural practices. War or competition over resources may impact technological development or social dynamics. Additionally, cultural ideas may transfer from one society to another, through diffusion or acculturation. In diffusion, the form of something (though not necessarily its meaning) moves from one culture to another. 
For example, Western restaurant chains and culinary brands sparked curiosity and fascination among the Chinese as China opened its economy to international trade in the late 20th century. "Stimulus diffusion" (the sharing of ideas) refers to an element of one culture leading to an invention or propagation in another. "Direct borrowing", on the other hand, tends to refer to technological or tangible diffusion from one culture to another. Diffusion of innovations theory presents a research-based model of why and when individuals and cultures adopt new ideas, practices, and products. Acculturation has different meanings, but in this context it refers to the replacement of traits of one culture with another, such as what happened to certain Native American tribes and many indigenous peoples across the globe during colonization. Related processes on an individual level include assimilation and transculturation. The transnational flow of culture has played a major role in merging different cultures and sharing thoughts, ideas, and beliefs. Early modern discourses Immanuel Kant (1724–1804) formulated an individualist definition of "enlightenment" similar to the concept of bildung: "Enlightenment is man's emergence from his self-incurred immaturity." He argued that this immaturity comes not from a lack of understanding, but from a lack of courage to think independently. Against this intellectual cowardice, Kant urged: "Sapere Aude" ("Dare to be wise!"). In reaction to Kant, German scholars such as Johann Gottfried Herder (1744–1803) argued that human creativity, which necessarily takes unpredictable and highly diverse forms, is as important as human rationality. Moreover, Herder proposed a collective form of Bildung: "For Herder, Bildung was the totality of experiences that provide a coherent identity, and sense of common destiny, to a people." In 1795, the Prussian linguist and philosopher Wilhelm von Humboldt (1767–1835) called for an anthropology that would synthesize Kant's and Herder's interests. During the Romantic era, scholars in Germany, especially those concerned with nationalist movements—such as the nationalist struggle to create a "Germany" out of diverse principalities, and the nationalist struggles by ethnic minorities against the Austro-Hungarian Empire—developed a more inclusive notion of culture as "worldview" (Weltanschauung). According to this school of thought, each ethnic group has a distinct worldview that is incommensurable with the worldviews of other groups. Although more inclusive than earlier views, this approach to culture still allowed for distinctions between "civilized" and "primitive" or "tribal" cultures. In 1860, Adolf Bastian (1826–1905) argued for "the psychic unity of mankind". He proposed that a scientific comparison of all human societies would reveal that distinct worldviews consisted of the same basic elements. According to Bastian, all human societies share a set of "elementary ideas" (Elementargedanken); different cultures, or different "folk ideas" (Völkergedanken), are local modifications of the elementary ideas. This view paved the way for the modern understanding of culture. Franz Boas (1858–1942) was trained in this tradition, and he brought it with him when he left Germany for the United States. In the 19th century, humanists such as English poet and essayist Matthew Arnold (1822–1888) used the word "culture" to refer to an ideal of individual human refinement, of "the best that has been thought and said in the world".
This concept of culture is also comparable to the German concept of bildung: "...culture being a pursuit of our total perfection by means of getting to know, on all the matters which most concern us, the best which has been thought and said in the world". In practice, culture referred to an elite ideal and was associated with such activities as art, classical music, and haute cuisine. As these forms were associated with urban life, "culture" was identified with "civilization" (from Latin: civitas, lit. 'city'). Another facet of the Romantic movement was an interest in folklore, which led to identifying a "culture" among non-elites. This distinction is often characterized as that between high culture, namely that of the ruling class, and low culture. In other words, the idea of "culture" that developed in Europe during the 18th and early 19th centuries reflected inequalities within European societies. Matthew Arnold contrasted "culture" with anarchy; other Europeans, following philosophers Thomas Hobbes and Jean-Jacques Rousseau, contrasted "culture" with "the state of nature". According to Hobbes and Rousseau, the Native Americans who were being conquered by Europeans from the 16th century on were living in a state of nature; this opposition was expressed through the contrast between "civilized" and "uncivilized". According to this way of thinking, one could classify some countries and nations as more civilized than others and some people as more cultured than others. This contrast led to Herbert Spencer's theory of Social Darwinism and Lewis Henry Morgan's theory of cultural evolution. Just as some critics have argued that the distinction between high and low cultures expresses the conflict between European elites and non-elites, other critics have argued that the distinction between civilized and uncivilized people is an expression of the conflict between European colonial powers and their colonial subjects. Other 19th-century critics, following Rousseau, have accepted this differentiation between higher and lower culture, but have seen the refinement and sophistication of high culture as corrupting and unnatural developments that obscure and distort people's essential nature. These critics considered folk music (as produced by "the folk," i.e., rural, illiterate peasants) to honestly express a natural way of life, while classical music seemed superficial and decadent. Equally, this view often portrayed indigenous peoples as "noble savages" living authentic and unblemished lives, uncomplicated and uncorrupted by the highly stratified capitalist systems of Western culture. In 1870, the anthropologist Edward Tylor (1832–1917) applied these ideas of higher versus lower culture to propose a theory of the evolution of religion. According to this theory, religion evolves from more polytheistic to more monotheistic forms. In the process, he redefined culture as a diverse set of activities characteristic of all human societies. This view paved the way for the modern understanding of religion. Anthropology Although anthropologists worldwide refer to Tylor's definition of culture, in the 20th century "culture" emerged as the central and unifying concept of American anthropology, where it most commonly refers to the universal human capacity to classify and encode human experiences symbolically, and to communicate symbolically encoded experiences socially.
American anthropology is organized into four fields, each of which plays an important role in research on culture: biological anthropology, linguistic anthropology, cultural anthropology, and, in the United States and Canada, archaeology. The term Kulturbrille, or 'culture glasses', coined by German American anthropologist Franz Boas, refers to the "lenses" through which a person sees their own culture. Martin Lindstrom asserts that the Kulturbrille, which allows a person to make sense of the culture they inhabit, "can blind us to things outsiders pick up immediately". Sociology The sociology of culture concerns culture as manifested in society. For sociologist Georg Simmel (1858–1918), culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history". As such, culture in the sociological field can be defined as the ways of thinking, the ways of acting, and the material objects that together shape a people's way of life. Culture can be of two types: non-material culture or material culture. Non-material culture refers to the non-physical ideas that individuals have about their culture, including values, belief systems, rules, norms, morals, language, organizations, and institutions, while material culture is the physical evidence of a culture in the objects and architecture they make or have made. The term tends to be relevant only in archaeological and anthropological studies, but it specifically means all material evidence which can be attributed to culture, past or present. Cultural sociology first emerged in Weimar Germany (1918–1933), where sociologists such as Alfred Weber used the term Kultursoziologie ('cultural sociology'). Cultural sociology was then reinvented in the English-speaking world as a product of the cultural turn of the 1960s, which ushered in structuralist and postmodern approaches to social science. This type of cultural sociology may be loosely regarded as an approach incorporating cultural analysis and critical theory. Cultural sociologists tend to reject scientific methods, instead hermeneutically focusing on words, artifacts and symbols. Culture has since become an important concept across many branches of sociology, including resolutely scientific fields like social stratification and social network analysis. As a result, there has been a recent influx of quantitative sociologists to the field. Thus, there is now a growing group of sociologists of culture who are, confusingly, not cultural sociologists. These scholars reject the abstracted postmodern aspects of cultural sociology, and instead, look for a theoretical backing in the more scientific vein of social psychology and cognitive science. The sociology of culture grew from the intersection between sociology (as shaped by early theorists like Marx, Durkheim, and Weber) with the growing discipline of anthropology, wherein researchers pioneered ethnographic strategies for describing and analyzing a variety of cultures around the world. Part of the legacy of the early development of the field lingers in the methods (much cultural-sociological research is qualitative), in the theories (a variety of critical approaches to sociology are central to current research communities), and in the substantive focus of the field. For instance, relationships between popular culture, political control, and social class were early and lasting concerns in the field.
Cultural studies In the United Kingdom, sociologists and other scholars influenced by Marxism such as Stuart Hall (1932–2014) and Raymond Williams (1921–88) developed cultural studies. Following nineteenth-century Romantics, they identified culture with consumption goods and leisure activities (such as art, music, film, food, sports, and clothing). They saw patterns of consumption and leisure as determined by relations of production, which led them to focus on class relations and the organization of production. In the UK, cultural studies focuses largely on the study of popular culture; that is, on the social meanings of mass-produced consumer and leisure goods. Richard Hoggart coined the term in 1964 when he founded the Centre for Contemporary Cultural Studies or CCCS. Cultural studies in this sense, then, can be viewed as a limited concentration scoped on the intricacies of consumerism, which belongs to a wider culture sometimes referred to as Western civilization or globalism. From the 1970s onward, Stuart Hall's pioneering work, along with that of his colleagues Paul Willis, Dick Hebdige, Tony Jefferson, and Angela McRobbie, created an international intellectual movement. As the field developed, it began to combine political economy, communication, sociology, social theory, literary theory, media theory, film/video studies, cultural anthropology, philosophy, museum studies, and art history to study cultural phenomena or cultural texts. In this field researchers often concentrate on how particular phenomena relate to matters of ideology, nationality, ethnicity, social class, or gender. Cultural studies is concerned with the meaning and practices of everyday life. These practices comprise the ways people do particular things (such as watching television or eating out) in a given culture. It also studies the meanings and uses people attribute to various objects and practices. Specifically, culture involves those meanings and practices held independently of reason. Watching television to view a public perspective on a historical event should not be thought of as culture unless referring to the medium of television itself, which may have been selected culturally; however, schoolchildren watching television after school with their friends to "fit in" certainly qualifies since there is no grounded reason for one's participation in this practice. In the context of cultural studies, a text includes not only written language, but also films, photographs, fashion, or hairstyles: the texts of cultural studies comprise all the meaningful artifacts of culture. Similarly, the discipline widens the concept of culture. Culture, for a cultural-studies researcher, not only includes traditional high culture (the culture of the ruling social groups) and popular culture, but also everyday meanings and practices. The last two, in fact, have become the main focus of cultural studies. A further and recent approach is comparative cultural studies, based on the disciplines of comparative literature and cultural studies. Scholars in the UK and the US developed different versions of cultural studies after the 1970s. The British version of cultural studies had originated in the 1950s and 60s, mainly under the influence of Richard Hoggart, E. P. Thompson, and Raymond Williams, and later that of Stuart Hall and others at the Centre for Contemporary Cultural Studies. 
This included overtly political, left-wing views, and criticisms of popular culture as "capitalist" mass culture; it absorbed some of the ideas of the Frankfurt School critique of the "culture industry", i.e. mass culture. This emerges in the writings of early British cultural-studies scholars and their influences: see the work of Raymond Williams, Stuart Hall, Paul Willis, and Paul Gilroy. In the United States, Lindlof and Taylor write, "cultural studies [were] grounded in a pragmatic, liberal-pluralist tradition." The American version of cultural studies initially concerned itself more with understanding the subjective and appropriative side of audience reactions to, and uses of, mass culture; for example, American cultural-studies advocates wrote about the liberatory aspects of fandom. Some researchers, especially in early British cultural studies, apply a Marxist model to the field. This strain of thinking has some influence from the Frankfurt School, but especially from the structuralist Marxism of Louis Althusser and others. The main focus of an orthodox Marxist approach concentrates on the production of meaning. This model assumes a mass production of culture and identifies power as residing with those producing cultural artifacts. In a Marxist view, the mode and relations of production form the economic base of society, which constantly interacts and influences superstructures, such as culture. Other approaches to cultural studies, such as feminist cultural studies and later American developments of the field, distance themselves from this view. They criticize the Marxist assumption of a single, dominant meaning, shared by all, for any cultural product. The non-Marxist approaches suggest that different ways of consuming cultural artifacts affect the meaning of the product. This view comes through in the book Doing Cultural Studies: The Story of the Sony Walkman (by Paul du Gay et al.), which seeks to challenge the notion that those who produce commodities control the meanings that people attribute to them. Feminist cultural analyst, theorist, and art historian Griselda Pollock contributed to cultural studies from viewpoints of art history and psychoanalysis. The writer Julia Kristeva is among influential voices at the turn of the century, contributing to cultural studies from the field of art and psychoanalytical French feminism. Petrakis and Kostis (2013) divide cultural background variables into two main groups. In 2016, a new approach to culture was suggested by Rein Raud, who defines culture as the sum of resources available to human beings for making sense of their world and proposes a two-tiered approach, combining the study of texts (all reified meanings in circulation) and cultural practices (all repeatable actions that involve the production, dissemination or transmission of purposes), thus making it possible to re-link anthropological and sociological study of culture with the tradition of textual theory. A super-culture is a collection of cultures and subcultures that interact with one another, share similar characteristics, and collectively have a degree of unity. In other words, a super-culture is a culture encompassing several subcultures with common elements.
Psychology Starting in the 1990s, psychological research on cultural influence began to grow and challenge the universality assumed in general psychology. Cultural psychologists began to explore the relationship between emotions and culture, and to ask whether the human mind is independent from culture. For example, people from collectivistic cultures, such as the Japanese, suppress their positive emotions more than their American counterparts. Culture may affect the way that people experience and express emotions. On the other hand, some researchers try to look for differences between people's personalities across cultures. As different cultures dictate distinctive norms, culture shock is also studied to understand how people react when they are confronted with other cultures. LGBT culture meets significantly different levels of tolerance in different cultures and nations. Cognitive tools may not be accessible, or they may function differently across cultures. For example, people who are raised in a culture with an abacus are trained in a distinctive reasoning style. Cultural lenses may also make people view the same outcome of events differently. Westerners are more motivated by their successes than their failures, while East Asians are better motivated by the avoidance of failure. Culture is important for psychologists to consider when seeking to understand human mental operation. The notion of the anxious, unstable, and rebellious adolescent has been criticized by experts, such as Robert Epstein, who state that an undeveloped brain is not the main cause of teenagers' turmoil. Some have criticized this understanding of adolescence, classifying it as a relatively recent phenomenon in human history created by modern society, and have been highly critical of what they view as the infantilization of young adults in American society. According to Robert Epstein and Jennifer, "American-style teen turmoil is absent in more than 100 cultures around the world, suggesting that such mayhem is not biologically inevitable. Second, the brain itself changes in response to experiences, raising the question of whether adolescent brain characteristics are the cause of teen tumult or rather the result of lifestyle and experiences." David Moshman has also stated with regard to adolescence that brain research "is crucial for a full picture, but it does not provide an ultimate explanation". Protection of culture There are a number of international agreements and national laws relating to the protection of cultural heritage and cultural diversity. UNESCO and its partner organizations such as Blue Shield International coordinate international protection and local implementation. The Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict and the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions deal with the protection of culture. Article 27 of the Universal Declaration of Human Rights deals with cultural heritage in two ways: it gives people the right to participate in cultural life on the one hand and the right to the protection of their contributions to cultural life on the other. In the 21st century, the protection of culture has been the focus of increasing activity by national and international organizations.
The United Nations and UNESCO promote cultural preservation and cultural diversity through declarations and legally binding conventions or treaties. The aim is not to protect a person's property, but rather to preserve the cultural heritage of humanity, especially in the event of war and armed conflict. According to Karl von Habsburg, President of Blue Shield International, the destruction of cultural assets is also part of psychological warfare. The target of the attack is the identity of the opponent, which is why symbolic cultural assets become a main target. It is also intended to affect the particularly sensitive cultural memory, the growing cultural diversity and the economic basis (such as tourism) of a state, region or municipality. Tourism is having an increasing impact on the various forms of culture. On the one hand, this can mean physical impact on individual objects or the destruction caused by increasing environmental pollution and, on the other hand, socio-cultural effects on society.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/French-Canadian_Americans] | [TOKENS: 999] |
Contents French-Canadian Americans French-Canadian Americans (French: Canadiens-Français des États-Unis; also referred to as Franco-Canadian Americans or Canadien Americans) are Americans of French Canadian descent. About 2 million U.S. residents cited this ancestry in the 2020 census. In the 2010 census, the majority of respondents reported speaking French at home. Americans of French-Canadian descent are most heavily concentrated in New England, New York State, Louisiana and the Midwest. Their ancestors mostly arrived in the United States from Quebec between 1840 and 1930, though some families became established as early as the 17th and 18th centuries. The term Canadien (French for "Canadian") may be used either in reference to nationality or ethnicity in regard to this population group. French-Canadian Americans, because of their proximity to Canada and Quebec, kept their language, culture, and religion alive much longer than any other ethnic group in the United States apart from Mexican Americans. Many "Little Canada" neighborhoods developed in New England cities, but gradually disappeared as their residents eventually assimilated into the American mainstream. A revival of the Canadian identity has taken place in the Midwestern states, where some families of French descent have lived for many generations. These states had been considered part of Canada until 1783. A return to their roots seems to be taking place, with a greater interest in all things that are Canadian or Québécois. French-Canadian population in New England In the late 19th century, many Francophones arrived in New England from Quebec and New Brunswick to work in textile mill cities in New England. In the same period, Francophones from Quebec soon became a majority of the workers in the sawmills and logging camps in the Adirondack Mountains and their foothills. Others sought opportunities for farming and other trades such as blacksmithing in Upstate New York. By the mid-20th century, French-Canadian Americans comprised 30 percent of Maine's population. Some migrants became lumberjacks, but most concentrated in industrialized areas and into enclaves known as Little Canadas in cities like Lewiston, Maine, Holyoke, Massachusetts, and Woonsocket, Rhode Island. Driven by depleted farmlands, poverty and a lack of local economic opportunities, rural inhabitants of these areas sought work in the expanding mill industries. Newspapers in New England carried advertisements touting the desirability of wage labor in the textile mills. In addition to industry's organized recruitment campaigns, the close kinship network of French-Canadians facilitated transnational communication and the awareness of economic opportunity for their friends and relatives. Individual French-Canadian families who desired dwellings developed French Canadian neighborhoods, called Petit Canadas, and sought out local financing. Most arrived through railroads such as the Grand Trunk Railroad. French-Canadian women saw New England as a place of opportunity and possibility where they could create economic alternatives for themselves distinct from the expectations of their farm families in Canada. By the early 20th century some saw temporary migration to the United States to work as a rite of passage and a time of self-discovery and self-reliance. Most moved permanently to the United States, using the inexpensive railroad system to visit Quebec from time to time.
When these women did marry, they had fewer children, with longer intervals between children, than their Canadian counterparts. Some women never married, and oral accounts suggest that self-reliance and economic independence were important reasons for choosing work over marriage and motherhood. These women conformed to traditional gender ideals in order to retain their Canadienne cultural identity, but they also redefined these roles in ways that gave them increased independence as wives and mothers. The French-Canadians became active in the Catholic Church, where they tried with little success to challenge its domination by Irish clerics. They founded newspapers such as Le Messager and La Justice. The first hospital in Lewiston, Maine, became a reality in 1889 when the Sisters of Charity of Montreal, the "Grey Nuns", opened the doors of the Asylum of Our Lady of Lourdes. This hospital was central to the Grey Nuns' mission of providing social services for Lewiston's predominantly French-Canadian mill workers. The Grey Nuns struggled to establish their institution despite meager financial resources, language barriers, and opposition from the established medical community. Immigration dwindled with the U.S. immigration restrictions that followed World War I. The French-Canadian community in New England nevertheless tried to preserve some of its cultural norms. This doctrine, like parallel efforts to preserve Francophone culture in Quebec, became known as la Survivance.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Weak_artificial_intelligence#cite_ref-7] | [TOKENS: 594] |
Weak artificial intelligence

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind; when focused on one narrow task, it is also known as narrow AI or artificial narrow intelligence (ANI). Weak AI is contrasted with strong AI, which can be interpreted in various ways. Narrow AI can be described as "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Artificial general intelligence is, conversely, the opposite.

Applications and risks

Some examples of narrow AI are AlphaGo, self-driving cars, robotic systems used in the medical field, and diagnostic tools. Narrow AI systems can be dangerous if unreliable, and their behavior can become inconsistent. It can be difficult for such systems to grasp complex patterns and to reach solutions that work reliably across varied environments; this "brittleness" can cause them to fail in unpredictable ways. Narrow AI failures can sometimes have significant consequences: they could, for example, cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, or misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed, and medical diagnoses can have serious and sometimes deadly consequences if the AI is faulty or biased. Simple AI programs have already worked their way into society, oftentimes unnoticed by the public. Autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. Narrow AI has also been the subject of controversy, including cases of unfair prison sentencing, hiring discrimination against women, and deaths caused by autonomous driving, among others. Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends (a toy sketch of such a system appears at the end of this article). For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. Some other social media AI systems are used to detect bots that may be involved in propaganda or other potentially malicious activities.

Weak AI versus strong AI

John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong". Scholars such as Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs "narrow" AI distinction) and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as the strong-AI assumption, by contrast, implies).
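To make the "narrow task" idea above concrete, the following toy sketch in Python implements the single capability described for recommender systems under Applications and risks: scoring items for a user and returning the top matches. The latent-factor model shown is one common recommender technique chosen for illustration, and every name and number in it is hypothetical; this is not a description of TikTok's or any other deployed system.

    # A toy "narrow AI": it performs exactly one task (ranking items
    # for a user from latent preference factors) and nothing else.
    # All data here is randomly generated for illustration.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_users, n_items, n_factors = 5, 8, 3

    # Latent-factor model: users and items live in a shared preference
    # space; predicted affinity is the dot product of their vectors.
    user_factors = rng.normal(size=(n_users, n_factors))
    item_factors = rng.normal(size=(n_items, n_factors))

    def recommend(user_id, k=3):
        """Return the k items with the highest predicted affinity."""
        scores = item_factors @ user_factors[user_id]
        return np.argsort(-scores)[:k].tolist()

    print(recommend(user_id=0))

However competent such a system is at its one task, it has no competence outside it, which is exactly the brittleness the article describes.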
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-space20130102-46] | [TOKENS: 11349] |
Extraterrestrial life

Extraterrestrial life, or alien life (colloquially, aliens), is life that originates from another world rather than from Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology.

Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation.

In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere altogether. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given humanity's history of exploiting other societies.

Context

Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin one second after the Big Bang. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disks of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently and then spread between habitable planets, by meteoroids for example, in a process called panspermia.

During most of their stellar evolution, stars fuse hydrogen nuclei into helium nuclei, and the slightly lower mass of the resulting helium allows the star to release the difference as energy. The process continues until the star uses all of its available fuel, with the speed of consumption related to the size of the star. During their last stages, stars start combining helium nuclei to form carbon nuclei. Larger stars can continue the process, fusing carbon and then heavier elements such as oxygen, neon, silicon, and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, such materials are ubiquitous in the cosmos and not a rarity of the Solar System.

Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of a galaxy, the Milky Way, which is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all such structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that would be lethal to humans, the distances cause long time delays: New Horizons took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of about 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would take about 100,000 years to arrive. With current technology, such systems can only be studied by telescopes, which have limitations. Dark matter is estimated to contain more combined mass than stars and gas clouds, but as it plays no role in the evolution of stars and planets, it is usually not taken into account by astrobiology.

There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, or even to actually have liquid water: Venus is located in the Solar System's habitable zone but does not have liquid water because of the conditions of its atmosphere. Jovian planets, or gas giants, are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances of the habitable zones vary according to the type of star, and even the activity of each specific star influences local habitability. The type of star also defines how long the habitable zone will exist, as its location and limits shift along with the star's evolution.
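As a rough illustration of how the habitable zone scales with the star, the sketch below (Python) uses the common first-order approximation that the orbital distance receiving a given stellar flux grows with the square root of the star's luminosity. The effective-flux bounds (about 1.1 and 0.53 times Earth's insolation for the inner and outer limits) are approximate values from the astrobiology literature, used here only as assumptions; as noted above, real habitable-zone models depend on many more factors.

    import math

    # Conservative effective-flux bounds in units of Earth's insolation
    # (assumed, approximate literature values).
    S_INNER, S_OUTER = 1.1, 0.53

    def habitable_zone_au(luminosity_solar):
        """First-order habitable zone (inner, outer) in AU.

        Stellar flux falls off as 1/r^2, so the orbit receiving flux
        S_eff around a star of luminosity L sits at r = sqrt(L / S_eff).
        """
        return (math.sqrt(luminosity_solar / S_INNER),
                math.sqrt(luminosity_solar / S_OUTER))

    for name, lum in [("red dwarf", 0.05), ("Sun-like star", 1.0), ("F-type star", 2.5)]:
        inner, outer = habitable_zone_au(lum)
        print(f"{name}: {inner:.2f}-{outer:.2f} AU")

Under these assumptions a Sun-like star's zone spans roughly 0.95 to 1.37 AU, while a dim red dwarf's zone huddles much closer to the star.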
The Big Bang occurred 13.8 billion years ago, the Solar System formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered on a cosmic scale, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements: a celestial body may not have any life on it, even if it were habitable.

Likelihood of existence

No life beyond Earth has ever been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors, ranging from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet meets all such requirements simultaneously. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is

N = R* · fp · ne · fl · fi · fc · L

where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R* is the rate of star formation in the galaxy, fp is the fraction of stars with planetary systems, ne is the number of planets per such system with an environment suitable for life, fl is the fraction of suitable planets on which life actually appears, fi is the fraction of life-bearing planets on which intelligent life emerges, fc is the fraction of civilizations that develop technology releasing detectable signs of their existence, and L is the length of time such civilizations release detectable signals. Drake's proposed estimates, with the numbers on the right side of the equation agreed to be speculative and open to substitution, were:

10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000

The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation.
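Because every factor enters the Drake equation as a simple product, it is straightforward to compute. The short sketch below (Python) encodes the equation with Drake's own illustrative estimates quoted above; the values are the speculative placeholders from the text, not measurements.

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        """N = R* x fp x ne x fl x fi x fc x L (the Drake equation)."""
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    # Drake's illustrative estimates, as quoted above; every factor
    # is speculative and open to substitution.
    N = drake(R_star=5,    # rate of star formation (stars/year)
              f_p=0.5,     # fraction of stars with planets
              n_e=2,       # habitable planets per such system
              f_l=1,       # fraction on which life appears
              f_i=0.2,     # fraction developing intelligence
              f_c=1,       # fraction releasing detectable signals
              L=10_000)    # years such signals are released
    print(N)  # -> 10000.0 communicative civilizations

Swapping in different estimates for any factor changes the result by orders of magnitude, which is precisely why the equation stimulates discussion rather than settling it.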
Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets; in other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100-400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation of the Fermi paradox.

Biochemical basis

If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are stars; life on Earth depends on the energy of the Sun. However, there are alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.

Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that abiogenesis can start within a gaseous or solid medium: atoms there move either too fast or too slowly, making it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane, or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, and antimony (three bonds), and carbon, silicon, germanium, and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more than the others. In Earth's crust the most abundant of those elements is silicon, in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen. Silicon, however, has disadvantages compared to carbon: the molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection, a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those of Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular to multicellular organisms through evolution, and so far no alternative process to achieve such a result has been conceived, even hypothetically. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed: the Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems, and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote, in a study in the International Journal of Astrobiology, that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Conditions on the other planets of the Solar System, and in the many galaxies beyond the Milky Way, are widely regarded as very harsh, seemingly too extreme to harbor any life. These environments can combine intense UV radiation with extreme temperatures, a lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem unlikely to have harbored life. Fossil evidence, along with long-standing theories backed by years of research, marks environments such as hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments are extreme compared with the typical ecosystems that the majority of life on Earth now inhabits: hydrothermal vents are scorching hot because magma escaping from the Earth's mantle meets the much colder oceanic water. Even today, diverse populations of bacteria inhabit the areas surrounding hydrothermal vents, suggesting that some form of life could be supported even in the harshest environments, such as those on other planets of the Solar System. What makes these harsh environments plausible sites for the origin of life on Earth, and for the possible emergence of life on other planets, is that the relevant chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy through reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and these carbon-fixing compounds were therefore necessary for the survival, and possibly the origin, of life on Earth. From the limited information scientists have about the atmospheres of other planets in the Milky Way and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same reduced, carbon-fixing chemistry occurring around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life.

Planetary habitability in the Solar System

The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth, and no intelligence other than humans is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that finding large ecosystems would be unlikely, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and then developed in a different way. It has a greenhouse effect, the hottest planetary surface in the Solar System, and clouds of sulfuric acid; all its surface liquid water has been lost, and it retains a thick carbon-dioxide atmosphere with enormous pressure. Comparing the two planets helps clarify the precise differences that lead to beneficial or harmful conditions for life. Despite the conditions against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life, and the most distant Solar System bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, though they cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting them. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. Europa's ocean, in contrast, would be in contact with the rocky interior, which helps drive chemical reactions. It may be difficult, though, to dig deep enough to study such oceans. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these columns, but could not make a full study because NASA had not expected the phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons; however, it lies at such a great depth that it would be very difficult to access for study.

Scientific search

The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. By studying Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continued existence. This helps determine what to look for when searching for life on other celestial bodies. It is a complex area of study that combines the perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and the atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in ALH84001, a meteorite formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. A lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms recording the way each one reflects sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar in this way, it would reveal a shade of green, a result of its abundance of photosynthesizing plants.
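The Cornell catalog idea above rests on the fact that photosynthetic organisms have characteristic reflectance spectra; Earth's vegetation, for instance, absorbs strongly in visible red light and reflects strongly in the near-infrared (the "red edge" near 700 nm). The sketch below (Python) computes a normalized contrast across that edge for two hypothetical planets; all reflectance numbers are invented for illustration and do not come from the catalog itself.

    # Toy reflectance values on either side of the vegetation
    # "red edge" (illustrative numbers, not real data).
    def red_edge_index(r_red, r_nir):
        """Normalized red vs. near-infrared contrast (NDVI-like).

        Photosynthetic surfaces absorb red light and reflect
        near-infrared, so a large positive value in a planet's
        reflected light would hint at a vegetation-style biosignature.
        """
        return (r_nir - r_red) / (r_nir + r_red)

    print("vegetated planet:", red_edge_index(r_red=0.05, r_nir=0.50))  # ~0.82
    print("barren rock:    ", red_edge_index(r_red=0.20, r_nir=0.25))   # ~0.11

Earth observed as an exoplanet would show a strong positive index of this kind, while a lifeless rocky world would show a nearly flat spectrum.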
In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out terrestrial contamination of the meteorites, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first detection, in the plumes of Saturn's moon Enceladus, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."

Although most searches focus on the biology of extraterrestrial life, an extraterrestrial intelligence capable of developing a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the home planet that natural causes would not produce. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is less prone to human biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message.

The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth, and fossil fuels may be generated and used on such worlds as well. The abundance of chlorofluorocarbons in an atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development; however, modern telescopes are not strong enough to study exoplanets with the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star, which would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation that telescopes may notice. Such infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show an infrared excess. The presence of heavy elements in a star's light-spectrum is another potential technosignature: such elements would, in theory, be found if the star were being used as an incinerator or repository for nuclear waste products.
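The infrared excess mentioned above can be illustrated with simple blackbody physics. The sketch below (Python) compares the 10-micrometre spectral luminosity of a bare Sun-like photosphere with that of a hypothetical swarm of structures at 1 AU re-radiating the absorbed starlight as roughly 300 K waste heat; the temperatures and radii are assumptions chosen for illustration, not survey data.

    import math

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI physical constants

    def planck(wavelength_m, temp_k):
        """Blackbody spectral radiance B(lambda, T) in W sr^-1 m^-3."""
        x = H * C / (wavelength_m * KB * temp_k)
        return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

    def spectral_luminosity(radius_m, temp_k, wavelength_m):
        """Spectral luminosity of a spherical blackbody: 4*pi*R^2 * pi * B."""
        return 4.0 * math.pi * radius_m**2 * math.pi * planck(wavelength_m, temp_k)

    R_SUN, AU = 6.96e8, 1.496e11  # metres (illustrative geometry)
    WL = 10e-6                    # 10 micrometres, mid-infrared

    star = spectral_luminosity(R_SUN, 5800.0, WL)   # bare photosphere
    swarm = spectral_luminosity(AU, 300.0, WL)      # warm waste-heat shell
    print(f"10 um excess (swarm / bare star): {swarm / star:.0f}x")

With these assumptions the warm shell outshines the bare star at 10 micrometres by roughly two orders of magnitude, despite emitting the same total power; an anomaly of that kind is what an infrared survey could flag around an old star.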
Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets discovered so far range in size from terrestrial planets similar in size to Earth to gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would amount to 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions.
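The census above is a back-of-envelope product, reproduced in the short sketch below (Python). The 25% share of Sun-like stars is an assumed round figure chosen so the arithmetic matches the quoted totals; published estimates of that fraction vary.

    # Reproducing the quoted back-of-envelope census.
    stars_milky_way = 200e9   # stellar census assumed in the text
    frac_sunlike = 0.25       # assumed share of Sun-like stars
    eta_earth = 0.22          # ~1 in 5 Sun-like stars hosts an
                              # Earth-sized habitable-zone planet
    habitable = stars_milky_way * frac_sunlike * eta_earth
    print(f"{habitable:.1e}")  # -> 1.1e+10, the ~11 billion quoted;
                               # adding red dwarfs raises it to ~40 billion
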
The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known was PSR B1257+12 A, about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment; on Earth, this replenishment occurs through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy as it transits its star, though this might only be feasible with dim stars like white dwarfs.

History and cultural impact

The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider the universe inherently understandable, rejecting explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursors to it, such as the principle that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the understanding that Earth is round rather than flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth; however, those bodies were not considered worlds. In the Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. Eventually two groups emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire, and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals, and its plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere; under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism describes the philosophical belief in numerous "worlds" in addition to Earth that might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism, which mention multiple "worlds" that support human life, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari Kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from current knowledge about the structure of the universe and did not postulate the existence of planetary systems other than the Solar System: when those authors spoke of other worlds, they meant places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived ancient Greece itself. The Great Library of Alexandria compiled information about these ideas, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese, and its own scholars, and this knowledge spread through the Byzantine Empire, from where it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term "panspermia" was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere. By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, and its refinement by Galileo Galilei, resolved the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas met resistance from the Catholic Church: Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Inquisition, which tried and executed him. The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics explaining the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There had been very little actual discussion of extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights but physical objects. The notion that life might exist on them soon became an ongoing topic of discussion, although one with no practical means of investigation. The possibility of extraterrestrials remained widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System was populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909, better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. Louis Pasteur disproved spontaneous generation in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System remained strong until Mariner 4 and Mariner 9 provided close-up images of Mars, which definitively debunked the idea of the existence of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named, developed during the late 19th century. The growth of extraterrestrials as a subject of fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, and others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and more powerful telescopes later revealed all such discoveries to be natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter: the low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective: abiogenesis, for example, is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and increased basic scientific knowledge among the general population thanks to science popularization through the mass media. Public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of those eras failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, weather phenomena, or hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin, and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas largely derived from firmly held religious, philosophical, and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can exist only on Earth, but interest in extraterrestrial life increased regardless, a result of advances in several sciences. Knowledge of planetary habitability makes it possible to assess, in scientific terms, the likelihood of finding life on each specific celestial body, as it is known which features are beneficial or harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may still be just a rarity of Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. Other scientists, on the other hand, are pessimistic: Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, which claims that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.

As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

Government responses

The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the case of an extraterrestrial contact.

One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility that primitive life exists on other planets of the Solar System. The French space agency has an office for the study of "unidentified aerospace phenomena", which maintains a publicly accessible database of such phenomena with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for about 25% of entries, an extraterrestrial origin can be neither confirmed nor ruled out.
In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagreed with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.

In fiction

Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, such beings were initially not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A common way to do so was to add body features from other animals, such as insects or octopuses. Costuming and special-effects feasibility, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype widely used in works of fiction.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Franks] | [TOKENS: 12893] |
Franks

The Franks (Latin: Franci or gens Francorum; German: Franken; French: Francs) were originally a group of Germanic peoples who lived near the Rhine military border of Germania Inferior, which was the most northerly province of the Roman Empire in continental Europe. The original Frankish language was West Germanic. These Frankish tribes lived for centuries under varying degrees of Roman hegemony and influence, but after the collapse of Roman institutions in western Europe they took control of a large empire including areas that had been ruled by Rome, and what it meant to be a Frank began to evolve. Once they were deeply established in Gaul, the Franks became a multilingual, Catholic Christian people, who subsequently came to rule over several other post-Roman kingdoms both inside and outside the old empire. In a broader sense, much of the population of western Europe could eventually be described as Franks in some contexts.

The term Frank itself first appeared in the 3rd century AD, during the crisis of the 3rd century, a period when Rome lost control of regions near the lower Rhine. In the 4th century, Roman authors also began to use another new collective term for enemy tribes in the lower Rhine, "Saxons". Although the Saxons and Franks were later more clearly distinguished, there are signs that the terms Frank and Saxon were not always mutually exclusive at first. Over centuries, the Romans recruited large numbers of Frankish soldiers, some of whom achieved high imperial rank. Already in the 4th century, Franks were living semi-independently in parts of Germania Inferior. The Roman administration of Britain and northern Gaul once again began to break down, and in about 406 it was the Franks who attempted to defend the Roman border when it was crossed by Alans and Vandals from eastern Europe. Frankish kings subsequently divided up Germania Inferior between them, and at least one, Chlodio, also began to rule more Romanized populations to the south, in what is now northern France. In 451, Frankish groups participated on both sides in the Battle of the Catalaunian Plains, where Attila and his allies were defeated by a Roman-led alliance of various peoples established in Gaul.

By the early 6th century, the whole of Gaul north of the Loire, and all the Frankish kingdoms, were united within the kingdom of the Frankish king Clovis I, the founder of the Merovingian dynasty. Building upon the basis of this empire, the subsequent Frankish dynasty, the Carolingians, eventually came to be seen as the new emperors of Western Europe in 800, when Charlemagne was crowned by the pope. As the original Frankish communities merged into others, the term Frank lost its original meaning. In 870, the Frankish realm was permanently divided between western and eastern kingdoms, which were the predecessors of the later Kingdom of France and the Holy Roman Empire respectively. The Latin term Franci, and equivalents in other languages, came to refer mainly to the people of the Kingdom of France, the forerunner of present-day France. However, in various historical contexts, such as during the medieval crusades, not only the French but also people from neighbouring regions in Western Europe continued to be referred to collectively as Franks. The crusades in particular had a lasting impact on the use of Frank-related names, which are now used generically for all Western Europeans in many non-European languages.
Name of the Franks

The origins of the term Franci (singular Francus) are unclear, but by the 4th century it was commonly used as a collective term to refer to several tribes who were also known to the Romans by their own tribal names. It also became a more commonly used term than the older but much broader collective name Germani, which also covered many non-Frankish peoples such as the Alemanni and Marcomanni. Within a few centuries the term had eclipsed the names of the original peoples who constituted the Frankish population. After their conquest of northern Gaul, many Germanic-speaking Franks lived in communities where the majority population was not Frankish and the dominant language was Gallo-Roman. However, as the Franks became more powerful and more integrated with the peoples they ruled over, the name came to be more broadly applied, especially in what is now northern France. Christopher Wickham pointed out that "the word 'Frankish' quickly ceased to have an exclusive ethnic connotation. North of the River Loire everyone seems to have been considered a Frank by the mid-7th century at the latest (except Bretons); Romani (Romans) were essentially the inhabitants of Aquitaine after that".

While the original meaning of the word is unclear, it is commonly believed to have a Germanic etymology. Following the precedents of Edward Gibbon and Jacob Grimm, the name of the Franks was traditionally linked to the Old French franc and related terms such as the English adjective frank, meaning "free". This term is, however, derived from the term Frank itself, as it referred to the Franks' free status. Similarly, the word has been connected to a Germanic word for "javelin", reflected in words such as Old English franca or Old Norse frakka, but these terms possibly also derive from the name of the Franks, as the name of a Frankish weapon. (Alternatively, this Germanic word may share its origins with an older term recorded in Latin as framea, which was used to describe the javelin used by the Germani.)

A common proposal to explain the ultimate origin of all these terms is that the name meant "fierce". According to one version of this proposal, the name is related to a reconstructed Proto-Germanic word, *frekaz, which meant "greedy" but sometimes tended towards meanings such as "bold". It has descendants such as German frech (cheeky, shameless), Middle Dutch vrec (miserly), Old English frǣc (greedy, bold), and Old Norse frekr (brazen, greedy). The idea that the name of the Franks meant fierce is partly derived from classical allusions to their ferocity and unreliability as defining traits. For example, Eumenius rhetorically addressed the Franks when Frankish prisoners were executed at Trier by Constantine I in 306: Ubi nunc est illa ferocia? Ubi semper infida mobilitas? ("Where now is that ferocity of yours? Where is that ever untrustworthy fickleness?"). Isidore of Seville (died 636) said that there were two proposals known to him: either the Franks took their name from a war leader who founded them, called Francus, or else their name referred to their wild manners (feritas morum).

As societies changed, the name acquired new meanings, and the old Frankish community ceased to exist in its original form. In Europe in later times, it was mainly the inhabitants of the Kingdom of France who came to be referred to in Latin as the Franci (Franks), although new terms soon became more common, which connect the French to the earlier Franks but also distinguish them.
The modern English word "French" comes from the Old English word for "Frankish", Frenċisċ. Modern European terms such as the French français and German Franzose derive from Medieval Latin francensis, meaning "from Francia", the country of the Franks, which for medieval people was France. In Medieval Latin, French people were also commonly referred to as francigenae, or "France-born". In more international contexts, such as during the crusades in the Eastern Mediterranean, the term Frank was also used for any Europeans from Western and Central Europe who followed the Latin rites of Christianity under the authority of the pope in Rome. The use of the term Frank to refer to all western Europeans spread eastwards to many Asian languages (see Farang).

Mythological origins

Several accounts from Merovingian times report that some medieval Franks believed that their ancestors originally moved to their Rhineland homeland from the Roman province of Pannonia on the Danube. These include the History of the Franks, which was written by Gregory of Tours in the 6th century, a 7th-century work known as the Chronicle of Fredegar, and the anonymous Liber Historiae Francorum, written a century later. While Gregory did not go deeply into the story, possibly because he rejected it, the other two sources report variants of the idea that, just as in the mythical origin story of the Romans created by Virgil, the Franks descended from Trojan royalty who escaped westwards after the Fall of Troy. Fredegar's version, which mentions the poet Virgil by name, connected the Franks not only to the Romans but also to the Phrygians, Macedonians, and Turks. He also reported that they built a new city on the Rhine named Troy after their ancestral home. The city he had in mind is likely to be the real Roman city now known as Xanten, located by the old Roman fort of Colonia Traiana, which was actually named after Trajan but was known as Troja minor (lesser Troy) in the Middle Ages.

The other work, the Liber Historiae Francorum, adds an episode to the story whereby the Pannonian Franks instead founded a city called Sicambria in Pannonia, and while there they fought successfully for a Roman emperor named Valentinian against the Alans, near the Sea of Azov, where the Franks had, according to the story, previously lived before moving to Pannonia. This city name appears to be based upon the Sicambri, who were one of the most well-known tribes in the Frankish Rhine homeland in the time of the early Roman empire. According to the story, the Franks were forced to leave Pannonia after rebelling against Roman taxes. In reality, the Franks had been living near the Rhine for centuries before the Valentinian dynasty really did confront the Alans, which happened in the late 4th century. It has been suggested that this element in the story may preserve stories from Frankish officers who served the dynasty against the Alans in southeastern Europe, such as Merobaudes. The story might also be influenced by memories of the later Frankish defence of the Roman empire during the subsequent entrance of Alans and other peoples into Gaul in about 406 AD, many of whom had previously been living in or near Pannonia. In particular, the Alans and other peoples who arrived from Pannonia were well known to later generations of Franks and Romans in northern Gaul. A kingdom of Alans was founded near Orleans after 406, and Attila's Hun alliance, also based near Pannonia, invaded Gaul in 451.
The name "Sicambria" can be explained as a derivative of the idea found in Graeco-Roman literature, that the Sicambri were ancestors of the later Franks, although in reality they had lived near the Rhine, like the Franks. On the other hand, concerning the Trojan element in the Frankish origin stories, historian Patrick J. Geary has for example written that they are "alike in betraying both the fact that the Franks knew little about their background and that they may have felt some inferiority in comparison with other peoples of antiquity who possessed an ancient name and glorious tradition." History The term "Franks" was probably first used during the Crisis of the Third Century (235–284 AD). However, most of the sources which mention Franks in this period were written much later, and cannot provide conclusive evidence about third-century terminology. In some cases, specific older tribes were explicitly categorized as Franks, including the Chamavi, Bructeri, and Chattuari. The Chamavi are called Franks in the Tabula Peutingeriana, a 13th-century copy of a 4th or 5th century atlas of Roman roads that reflects information from the 3rd century. The Chattuari were described as Franks living across from Xanten in an account of a Roman attack upon them in 360, and the Bructeri were also described as Franks living across from Cologne in an account of a Roman attack in 392/393. Archaeological evidence confirms that from around 250, the period when the Frankish name apparently first appeared, there was a massive decrease in population in many northern parts of Germania Inferior including cities. Several regions around the Rhine-Meuse and Scheldt deltas, remained relatively unpopulated until around 400. Roymans and Heeren proposed that one possible explanation for this sudden depopulation is that the Roman emperors deported very large numbers of rebellious locals out of the region. Productive agricultural land was abandoned on a large scale, making the Roman military along the Rhine highly dependent on grain imports from other provinces. Although the Rhine forts did not cease to function completely, the districts around the delta were "dispensed with once and for all as tax-paying administrative units". It has been noted by scholars of the earliest records mentioning Franks that there are also surprisingly frequent references to them raiding by sea. In contrast, later records describe the Frankish tribes living inland, separated from the sea by the Frisians and Saxons. It appears that in the third and fourth centuries the Romans did not yet clearly distinguish the sea-going Saxons, another new category of people in this period, from the Franks and Frisians. There are indications that the coastal Frisians who were always distinguished from the Franks in later records, as well as their original eastern neighbours the Chauci, may have contributed to the ethnogenesis of both the Saxons and the Franks. It is even speculated that the so-called Salian Franks, who appear only in records from around 378, may have originally been a Frisian or Chauci tribe. The earliest mention of Franks in the Augustan History is very uncertain. This is a much-later written collection of biographies of Roman emperors, which modern scholars believe to be largely fabricated. 
In its biography of the emperor Aurelian (reigned 270–275) it says that before being emperor he was at Mainz as "tribune of the Sixth Legion, the Gallican", a legion known from no other record, when he "crushed the Franks, who had burst into Gaul and were roving about through the whole country". He supposedly killed seven hundred of them and captured three hundred, selling them as slaves, and a song was supposedly composed about him: "Franks, Sarmatians by the thousand, once and once again we've slain. Now we seek a thousand Persians" (Mille Sarmatas, mille Francos semel et semel occidimus, mille Persas quaerimus). While the naming of the Franks within a supposedly popular song may seem unlikely to have been fabricated, even this is considered a likely fabrication by some scholars. If genuine, though, the song would have come into being before 270, when Aurelian became emperor, and the events themselves would have taken place around 245 to 253.

Other late sources for this period are considered somewhat more reliable. However, most of them did not use the term Frank, but less specific terms such as Germani or "barbarians". Around 256/257 Germani crossed the Rhine and attacked Gaul. Some were Alemanni, who went on to invade Italy from Gaul. By 258/259 other Germani had reached as far as Tarragona in Spain, and these even acquired ships in Spain with which they attacked North Africa. According to Aurelius Victor, writing in the 4th century, this latter group were Franks. In the aftermath, Postumus (emperor of the breakaway Gallic Empire 260–268) apparently managed to stabilize the border, and recruited Franks into his army, using them against his rival Gallienus.

Throughout the 260s and 270s very few surviving records explicitly mention the Franks, although the barbarians of the later Frankish region were very active. Gallienus reigned solo from 260 to 268, and for this period the document known as the Laterculus Veronensis, which was made about 314, notes that the Romans lost five civitates (small countries) along the eastern bank of the Lower Rhine. The three whose names are legible are those of the Usipii, Tubantes, and Chattuari. These probably all became Frankish. During this period, the 260s, archaeologists also note an increase in coin hoards in populations on the Roman side of the Rhine, in Tongeren, Amiens, Beauvais, Trier, Metz, Toul, and Chalon-sur-Saône, attesting to Frankish activity in this region. Under the last Gallic emperor, Tetricus (reigned 270–274), there are even more hoard finds, and evidence of military conflicts.

In 275/276, after the death of Tetricus and the reunification of the empire under Probus (reigned 276–282), archaeologists believe that a larger incursion into Gaul occurred, with the main thrust seemingly along the Meuse. In the context of these conflicts, Trier itself fell to an attack. The only barbarian group involved that is named by Roman sources is the Franks, mentioned by Zosimus. Probus subsequently appears to have restabilized the border. About 280, while Probus was confronted with a rebel named Proculus, the 8th Latin Panegyric, of 297, reports that some captive Franks seized some ships and "plundered their way from the Black Sea right to Greece and Asia and, driven not without causing damage from very many parts of the Libyan shore, finally took Syracuse itself", and eventually made it back to their homeland via the Ocean.
In 281 Probus captured and killed Proculus, and the Historia Augusta account of this says that it was the Franks who handed him over, because he had fled to them, having Frankish origins himself. The historian Eutropius, writing in the 4th century, and Orosius, writing around 400, reported that before 286 the emperor Maximian assigned Carausius to lead a naval force to pacify the English Channel coasts of Roman Belgica and Armorica, because these waters were infested by Frankish and Saxon pirates. This is also one of the first uses of the term Saxon, which was subsequently used for seagoing Germanic raiders.

The first contemporary record using the term Frank is the so-called 11th Latin Panegyric, written in 291. Taken in combination with the 10th panegyric of 289, these records indicate that in the winter of 287/288 Maximian, based in Trier at this time, forced a Frankish king Genobaud and his people to become Roman clients. Probably connected to this, Maximian had recently had at least one successful campaign east of the Rhine. Elsewhere the 11th panegyric also specifically mentions Franks being subdued in this period.

In 293/294, Constantius Chlorus, son-in-law of Maximian and father of Constantine I, defeated Franks in the Rhine-Meuse-Scheldt delta. Various groups had settled south of the Rhine within the empire, but were living outside of Roman governance while Carausius rebelled. Eumenius mentions Constantius as having "killed, expelled, captured [and] kidnapped" the Franks who had settled there and others who had crossed the Rhine, using the term nationes Franciae for the first time, indicating that the Franks were seen as more than one tribe or nation. The 6th Latin Panegyric, written in 310, says that the diverse tribes of Franks who had been ruling Batavia were under the leadership of Carausius. The 8th Latin Panegyric, written in 297, is commonly interpreted as naming two of the peoples conquered in this campaign as the Chamavi and Frisians, which makes it likely (but not certain) that both these peoples were considered Franks in this period. The same panegyric also emphasizes that there were Franks among the barbarian mercenaries of Carausius based in and around London.

In 308, Constantine the Great executed two "kings of Francia", Ascaric and Merogaisus, who had violated the peace after the death of his father Constantius, and then, "so that the enemy should not merely grieve over the punishment of their kings", made a devastating raid on the Bructeri and built a bridge over the Rhine at Cologne to "lord it over the remnants of a shattered nation". Further north, the panegyric celebrated Constantine's pacification of the Rhine by claiming that Roman farmers could now safely farm on the banks of both arms of the Rhine in Batavia. The later "4th" panegyric of 321 lists Bructeri, Chamavi, Cherusci, Lancionae, Alamanni, and Tubantes as peoples Constantine fought against successfully, who eventually formed an alliance against him. Several or all of these peoples were probably involved in the major field battle on the Rhine in 313, which is reported in the "12th" panegyric. The same panegyric of 321 gives the Franks, "who are more ferocious than other nations", one last seagoing role, saying they "held even the coasts of Spain infested with arms when a large number of them spread abroad beyond the Ocean itself in an outburst of fury in their passion to make war". It calls the Franks a "nation which is fecund to its own detriment".
The Laterculus Veronensis, a list of barbarian nations under Roman domination which was made about 314, lists Saxons and Franks separately from several of the older Rhineland tribal names, including the Chamavi ("Camari"), Cattuari ("Gallouari"), Amsiuari, Angriuari, Bructeri, and Cati.

In 341 the emperor Constans I, one of the sons of Constantine, attacked the Franks in the Rhine delta, and in 342 the situation was pacified. Scholars speculate that some Franks were given permission to remain in the area at this time. In 350 Magnentius, described by contemporaries as someone of Frankish and Saxon ancestry, became a rebel emperor. He killed Constans I and took control of much of the western empire, battling the brother of Constans, Constantius II, for control. During his revolt, which lasted until 353, the Rhine borders were undermanned and barbarians were able to enter Gaul. At the Battle of Mursa Major, Roman soldiers, including many with Frankish and Saxon backgrounds, fought each other, further weakening Rome's ability to defend itself. Magnentius finally died in Lyon in 353. Silvanus, one of his main commanders, who had defected to Constantius and also had Frankish ancestry, was given the task of rebuilding defences in Gaul. However, after being accused of plotting to become emperor, he decided to actually make the attempt in 355 and was killed soon afterwards.

In the spring of 358 the Salian Franks were described under that name for the only time in written history, and important new agreements were made between Franks and Romans. Julian the Apostate, commanding Roman forces in Gaul and not yet an emperor, made a rapid attack against both the Salians and the Chamavi, who were both making inroads within Roman territory around the Rhine-Meuse delta. The reason for this was primarily that he needed to ensure the arrival of 600 grain-carrying ships coming up the rivers from Britain, and he preferred not to simply pay the tribes off, as previous administrators had been doing. Similar accounts are given by Julian himself in his letter to the Athenians, by Ammianus Marcellinus, who served under him, by Libanius, who wrote his funeral oration, and by the later Greek historians Eunapius and Zosimus.

He first confronted the people whom Ammianus called "Franks who are customarily called Salians". Julian says he received the submission of part of the Salian tribe, but does not call them Franks. Zosimus says the Salians were descended from the Franks. According to Eunapius, the Salians were allowed by Julian to hold lands which they had not fought for. Ammianus indicates that they had been settling in Roman Texandria, south of the delta, which modern scholars believe was lightly populated at this time. However, Zosimus explains that they had previously been settled on the large island of Batavia in the delta, until an invasion by a people whom Zosimus called the "Quadi" and described as "Saxons". Zosimus also reports that before settling in Batavia, which had once been Roman ruled, the Salians had previously lived outside the empire and had similarly been forced by Saxons to move. Historians speculate that they may have been given permission by the Romans to settle in Texandria as early as 342. According to Zosimus, the Franks near the delta had been defending the Roman lands against Saxon raids, so that the "Quadi" had been forced to build boats, in which they sailed along the Rhine beyond the territory of the Franks in order to enter the Roman empire.
Eunapius says that Julian instructed his men not to hurt the Salians. The people whom Zosimus calls Saxons or Quadi are called Chamavi by the other sources. (The Chamavi are treated as Franks in other records, but Zosimus contrasted them with the Franks.) Despite these differences in terminology, Zosimus and Eunapius both remark how the barbarian Charietto was brought from Trier to neutralize this group's raiding, and how Julian captured the son of their king. Julian reported to the Athenians that he subsequently ejected them from their lands, and took captives and cattle. However, both Eunapius and Julian make it clear that he also needed an agreement with the Chamavi in order to secure a safe passage for food supplies.

All later references to the Salians as a people, as opposed to the much later legal code, could be connected to these events. The 5th-century Notitia Dignitatum mentions three military units whose names include the term "Salii", all three of which were created by Julian, who also created three parallel Tubantes units: the Salii and the Salii seniores, who both belonged to the auxilia palatina, and the Salii (iuniores) Gallicani. However, in this period units did not necessarily recruit from the barbarian groups they were named after. The tribe was also mentioned in a poetic way twice by the fifth-century poets Claudius Claudianus and Sidonius Apollinaris. Historian Matthias Springer has argued that the Salian name was not really their tribal name, but rather a Germanic word meaning something like "comrades". He proposed that the Salians were simply called Franks. According to Springer, the Salic law first mentioned centuries later is derived from the same word, but has no specific ethnic connotation, being simply the customary law holding for non-Roman free men.

In 360/361 Julian crossed the Rhine near Xanten and defeated the Chattuari, who were described as Franks in records of this event. During the late 360s, after the death of Julian, the "second" Latin Panegyric indicates that Count Theodosius fought and won an infantry campaign in Batavia, and perhaps also a naval campaign in the Maas and Waal rivers which surround it. The details are not explained in this or any other record, but other records mention that northern Gaul was afflicted by Saxon sea raiders and Frankish land raiders in this period. The archaeological evidence for the late fourth century suggests that the population remained low in the northern part of Roman Germania Inferior until almost 400.

During the reigns of emperors of the Valentinian dynasty, four Franks served as magistri militum (commanders-in-chief of the imperial army). In 388, Arbogast entered the Frankish frontier region personally and faced a Frankish invasion. In this year, Arbogast went to Trier on the orders of Theodosius and assassinated Victor, the son and heir of the recently executed Gaulish usurper Magnus Maximus. In the previous year, while Maximus had been attempting to take control of Italy, Franks under the command of three war leaders, Marcomer, Sunno and Genobaud, had crossed the Rhine and raided deep into Roman Gaul. Some returned over the Rhine successfully with their plunder, while others entered the Silva Carbonaria, a forest in present-day Belgium, where they were tracked down by Roman forces. Roman forces that tried to pursue the Franks over the Rhine were, however, cut to pieces. After the death of Maximus, Arbogast urged action. He met Marcomer and Sunno and demanded hostages, and then based himself in Trier.
After the death of Valentinian II, Arbogast took advantage of the leafless season and went to Cologne. He crossed the bridge there into the country of the Bructeri and plundered it, and then also plundered the region inhabited by the Chamavi. The Franks did not engage with him, although some Ampsivarii and Chatti under the command of Marcomer appeared on the ridge of a distant hill. By this time Arbogast had created his own usurper emperor, Eugenius. Eugenius was captured and executed by Theodosius after the Battle of the Frigidus in 394, and Arbogast subsequently committed suicide.

Under Theodosius the Great (emperor 379–395), the new magister militum on the Rhine, Stilicho, managed to pacify Germania Inferior for a short time. However, the prefecture of Gaul was relocated from Trier, near the Franks, to Vienne in what is now southern France, and then still further away to Arles, closer to Italy. After the death of Theodosius, Stilicho became more powerful, because Honorius, the son of Theodosius, was still young. In about 401/402 Stilicho moved Rhine forces to assist with the wars against the Goths in other parts of the empire. Large numbers of people from Central Europe, including Romans from Pannonia, moved west, crossed the Rhine, and entered Gaul. In about 406 a large force of Alans and Vandals from eastern Europe confronted the Rhine frontier, and in the ensuing Vandal–Frankish war it was the Franks who attempted to block them from passing into Gaul. They succeeded in killing one of the Alan kings, Respendial.

In 407, with Gaul and Britannia in chaos and unprotected, another Roman usurper, Constantine III, arose there to try to pacify the situation. Stilicho was killed in 408. By about 409 most of these Alans and Vandals had moved to Roman Hispania. The Franks took control of the area around Trier. Constantine III died in 411, and a new usurper, Jovinus, was proclaimed. Within a few decades Trier was taken and plundered by the Franks at least three times. Northern Gaul was no longer effectively being governed by the Roman empire, although Roman military commanders were clearly still present there at times. Archaeological evidence indicates sudden immigration into Germania Inferior of people who introduced rye consumption and new building and clothing styles. Their jewellery and pottery styles match styles found in what is now northern Germany. There are also signs that Roman gold, which had started entering the area east of the Rhine around 370, now also started to arrive within the empire itself. Roymans and Heeren suggest that usurpers such as Constantine III would have needed to pay off Frankish allies, and that such Franks later started to settle west of the Rhine.

By the 440s a Frankish king named Chlodio pushed beyond Germania Inferior into more Romanized lands south of the "Silva Carbonaria" or "Charcoal forest", which was south of modern Brussels. He conquered Tournai, Artois, Cambrai, and probably reached as far as the Somme river, in the Roman province of Belgica Secunda in what is now northern France. Chlodio is believed to be the ancestor of the future Merovingian dynasty. From his base in Pannonia and the Middle Danube, Attila and his allies launched a major invasion into Gaul, where they were defeated by a Roman-led alliance under the command of Flavius Aetius at the Battle of the Catalaunian Plains in 451. Franks fought on both sides.
Jordanes, in his Getica, mentions a group called the "Riparii" as auxiliaries during the Battle of Châlons in 451, distinct from the "Franci", but these Riparii ("river dwellers") are today not considered to have been Ripuarian Franks, but rather a known military unit based on the Rhône.

Childeric I, who according to Gregory of Tours was a reputed descendant of Chlodio, was later seen as the administrative ruler of Roman Belgica Secunda and possibly other areas. Records mentioning Childeric show he was active together with Roman forces in the Loire region. The area between the Loire and the Silva Carbonaria became the core of what would become medieval France. Childeric's son Clovis I also took control of the more independent Frankish kingdoms to the north and east, corresponding roughly to Roman Germania Inferior, which included Cologne, and Belgica I, which included Trier. This eastern Frankish region later evolved into the medieval Frankish region called Austrasia, and still later Lotharingia.

Childeric and his son Clovis I faced competition from the Roman Aegidius for the "kingship" of the Franks associated with the Roman Loire forces (according to Gregory of Tours, Aegidius held the kingship of the Franks for 8 years while Childeric was in exile). This new type of kingship, perhaps inspired by Alaric I, represents the start of the Merovingian dynasty, which succeeded in conquering most of Gaul in the 6th century, as well as establishing its leadership over all the Frankish kingdoms on the Rhine frontier. Aegidius died in 464 or 465. Childeric and his son Clovis I were both described as rulers of the Roman province of Belgica Secunda by its spiritual leader in the time of Clovis, Saint Remigius. Clovis later defeated the son of Aegidius, Syagrius, in 486 or 487, and then had the Frankish king Chararic imprisoned and executed. A few years later, he killed Ragnachar, the Frankish king of Cambrai, and his brothers. After conquering the Kingdom of Soissons and expelling the Visigoths from southern Gaul at the Battle of Vouillé, he established Frankish hegemony over most of Gaul, excluding Burgundy, Provence and Brittany, which were eventually absorbed by his successors. By the 490s, he had conquered all the Frankish kingdoms to the west of the River Maas except for the Ripuarian Franks, and was in a position to make the city of Paris his capital. He became the first king of all Franks in 509, after he had conquered Cologne.

Clovis I divided his realm between his four sons, who united to defeat Burgundy in 534. Internecine feuding occurred during the reigns of the brothers Sigebert I and Chilperic I, largely fuelled by the rivalry of their queens, Brunhilda and Fredegunda, and continued during the reigns of their sons and their grandsons. Three distinct subkingdoms emerged: Austrasia, Neustria and Burgundy, each of which developed independently and sought to exert influence over the others. The influence of the Arnulfing clan of Austrasia ensured that the political centre of gravity in the kingdom gradually shifted eastwards to the Rhineland. The Frankish realm was reunited in 613 by Chlothar II, the son of Chilperic, who granted his nobles the Edict of Paris in an effort to reduce corruption and reassert his authority. Following the military successes of his son and successor Dagobert I, royal authority rapidly declined under a series of kings traditionally known as les rois fainéants.
After the Battle of Tertry in 687, each mayor of the palace, who had formerly been the king's chief household official, effectively held power until, in 751, with the approval of the Pope and the nobility, Pepin the Short deposed the last Merovingian king, Childeric III, and had himself crowned. This inaugurated a new dynasty, the Carolingians.

The unification achieved by the Merovingians ensured the continuation of what has become known as the Carolingian Renaissance. The Carolingian Empire was beset by internecine warfare, but the combination of Frankish rule and Roman Christianity ensured that it was fundamentally united. Frankish government and culture depended very much upon each ruler and his aims, and so each region of the empire developed differently. Although a ruler's aims depended upon the political alliances of his family, the leading families of Francia shared the same basic beliefs and ideas of government, which had both Roman and Germanic roots.

The Frankish state consolidated its hold over the majority of western Europe by the end of the 8th century, developing into the Carolingian Empire. With the coronation of their ruler Charlemagne as Holy Roman Emperor by Pope Leo III in 800, he and his successors were recognised as legitimate successors to the emperors of the Western Roman Empire. As such, the Carolingian Empire gradually came to be seen in the West as a continuation of the ancient Roman Empire. This empire would give rise to several successor states, including France, the Holy Roman Empire and Burgundy. After the death of Charlemagne, his only surviving adult son, Louis the Pious, became emperor and king. Following Louis the Pious's death, however, in accordance with Frankish culture and law, which demanded equality among all living adult male heirs, the Frankish Empire was split between Louis' three sons.

Military

Germanic peoples, including those tribes in the Rhine delta that later became the Franks, are known to have served in the Roman army since the days of Julius Caesar. After the Roman administration collapsed in Gaul in the 260s, the armies under the Germanic Batavian Postumus revolted, proclaimed him emperor, and then restored order. From then on, Germanic soldiers in the Roman army, most notably Franks, were promoted from the ranks. A few decades later, the Menapian Carausius proclaimed himself a co-emperor and based himself in Britain. His military included Frankish soldiers. Later Frankish soldiers such as Magnentius, Silvanus, Ricomer and Bauto held command positions in the Roman army during the mid-4th century. From the narrative of Ammianus Marcellinus it is evident that both Frankish and Alamannic tribal armies were organised along Roman lines.

In the fifth century, the Roman armies at the Rhine border became a Frankish "franchise", and Franks were known to levy Roman-like troops that were supported by a Roman-like armour and weapons industry. This lasted at least until the days of the scholar Procopius (c. 500 – c. 565), more than a century after the demise of the Western Roman Empire, who described the former Arborychoi as having merged with the Franks while retaining the legionary organization of their forefathers from Roman times. The Franks under the Merovingians melded Germanic custom with Romanised organisation and several important tactical innovations.
The primary sources for Frankish military custom and armament are Ammianus Marcellinus, Agathias and Procopius, the latter two being Eastern Roman historians writing about Frankish intervention in the Gothic War. Writing of 539, Procopius says:

At this time the Franks, hearing that both the Goths and Romans had suffered severely by the war ... forgetting for the moment their oaths and treaties ... (for this nation in matters of trust is the most treacherous in the world), they straightway gathered to the number of one hundred thousand under the leadership of Theudebert I and marched into Italy: they had a small body of cavalry about their leader, and these were the only ones armed with spears, while all the rest were foot soldiers having neither bows nor spears, but each man carried a sword and shield and one axe. Now the iron head of this weapon was thick and exceedingly sharp on both sides, while the wooden handle was very short. And they are accustomed always to throw these axes at a signal in the first charge and thus to shatter the shields of the enemy and kill the men.

His contemporary, Agathias, who based his own writings upon the tropes laid down by Procopius, says:

The military equipment of this people [the Franks] is very simple ... They do not know the use of the coat of mail or greaves and the majority leave the head uncovered, only a few wear the helmet. They have their chests bare and backs naked to the loins, they cover their thighs with either leather or linen. They do not serve on horseback except in very rare cases. Fighting on foot is both habitual and a national custom and they are proficient in this. At the hip they wear a sword and on the left side their shield is attached. They have neither bows nor slings, no missile weapons except the double edged axe and the angon which they use most often. The angons are spears which are neither very short nor very long. They can be used, if necessary, for throwing like a javelin, and also in hand to hand combat.

In the Strategikon, supposedly written by the emperor Maurice or in his time, the Franks are lumped together with the Lombards under the heading of the "fair-haired" peoples:

If they are hard pressed in cavalry actions, they dismount at a single prearranged sign and line up on foot. Although only a few against many horsemen, they do not shrink from the fight. They are armed with shields, lances, and short swords slung from their shoulders. They prefer fighting on foot and rapid charges. [...] Either on horseback or on foot they are impetuous and undisciplined in charging, as if they were the only people in the world who are not cowards.

While the above quotations have been used as a statement of the military practices of the Frankish nation in the 6th century, and have even been extrapolated to the entire period preceding Charles Martel's reforms (early to mid-8th century), post-Second World War historiography has emphasised the inherited Roman characteristics of the Frankish military from the beginning of the conquest of Gaul. The Byzantine authors present several contradictions and difficulties. Procopius denies the Franks the use of the spear while Agathias makes it one of their primary weapons. They agree that the Franks were primarily infantrymen, threw axes and carried a sword and shield. Both writers also contradict the authority of Gallic authors of the same general time period (Sidonius Apollinaris and Gregory of Tours) and the archaeological evidence.
The Lex Ribuaria, the early 7th-century legal code of the Rhineland or Ripuarian Franks, specifies the values of various goods when paying a wergild in kind: whereas a spear and shield were worth only two solidi, a sword and scabbard were valued at seven, a helmet at six, and a "metal tunic" at twelve. Scramasaxes and arrowheads are numerous in Frankish graves even though the Byzantine historians do not assign them to the Franks.

The evidence of Gregory and of the Lex Salica implies that the early Franks were a cavalry people. In fact, some modern historians have hypothesised that the Franks possessed so numerous a body of horses that they could use them to plough fields, and were thus technologically more advanced in agriculture than their neighbours. The Lex Ribuaria specifies that a mare's value was the same as that of an ox or of a shield and spear, two solidi, and a stallion's at seven, the same as a sword and scabbard, which suggests that horses were relatively common. Perhaps the Byzantine writers considered the Frankish horse to be insignificant relative to the Greek cavalry, which is probably accurate.

The Frankish military establishment incorporated many of the pre-existing Roman institutions in Gaul, especially during and after the conquests of Clovis I in the late 5th and early 6th centuries. Frankish military strategy revolved around the holding and taking of fortified centres (castra), and in general these centres were held by garrisons of milites and laeti, who were descendants of Roman soldiers of Germanic origin, granted a quasi-national status under Frankish law. These milites continued to be commanded by tribunes. Throughout Gaul, the descendants of Roman soldiers continued to wear their uniforms and perform their ceremonial duties.

Immediately beneath the Frankish king in the military hierarchy were the leudes, his sworn followers, who were generally 'old soldiers' in service away from court. The king had an elite bodyguard called the truste. Members of the truste often served in centannae, garrison settlements that were established for military and police purposes. The day-to-day bodyguard of the king was made up of antrustiones (senior soldiers who were aristocrats in military service) and pueri (junior soldiers, not aristocrats). All high-ranking men had pueri.

The Frankish military was not composed solely of Franks and Gallo-Romans, but also contained Saxons, Alans, Taifals and Alemanni. After the conquest of Burgundy (534), the well-organised military institutions of that kingdom were integrated into the Frankish realm. Chief among these was the standing army under the command of the Patrician of Burgundy.

In the late 6th century, during the wars instigated by Fredegund and Brunhilda, the Merovingian monarchs introduced a new element into their militaries: the local levy. A levy consisted of all the able-bodied men of a district, who were required to report for military service when called upon, similar to conscription. The local levy applied only to a city and its environs. Initially only in certain cities in western Gaul, in Neustria and Aquitaine, did the kings possess the right or power to call up the levy. The commanders of the local levies were always different from the commanders of the urban garrisons. Often the former were commanded by the counts of the districts. A much rarer occurrence was the general levy, which applied to the entire kingdom and included peasants (pauperes and inferiores).
General levies could also be made within the still-pagan trans-Rhenish stem duchies on the orders of a monarch. The Saxons, Alemanni and Thuringi all had the institution of the levy, and the Frankish monarchs could depend upon their levies until the mid-7th century, when the stem dukes began to sever their ties to the monarchy. Radulf of Thuringia called up the levy for a war against Sigebert III in 640. Soon the local levy spread to Austrasia and the less Romanised regions of Gaul. On an intermediate level, the kings began calling up territorial levies from the regions of Austrasia (which did not have major cities of Roman origin). All the forms of the levy gradually disappeared, however, in the course of the 7th century after the reign of Dagobert I. Under the so-called rois fainéants, the levies disappeared by mid-century in Austrasia and later in Burgundy and Neustria. Only in Aquitaine, which was fast becoming independent of the central Frankish monarchy, did complex military institutions persist into the 8th century.

In the second half of the 7th century and the first half of the 8th in Merovingian Gaul, the chief military actors became the lay and ecclesiastical magnates with their bands of armed followers, called retainers. The other aspects of the Merovingian military, mostly Roman in origin or innovations of powerful kings, disappeared from the scene by the 8th century.

Merovingian armies used coats of mail, helmets, shields, lances, swords, bows and arrows and war horses. The armament of private armies resembled that of the Gallo-Roman potentiatores of the late Empire. A strong element of Alanic cavalry that settled in Armorica influenced the fighting style of the Bretons down into the 12th century. Local urban levies could be reasonably well-armed and even mounted, but the more general levies were composed of pauperes and inferiores, who were mostly farmers by trade and carried ineffective weapons, such as farming implements. The peoples east of the Rhine – Franks, Saxons and even Wends – who were sometimes called upon to serve, wore rudimentary armour and carried weapons such as spears and axes. Few of these men were mounted.

Merovingian society had a militarised nature. The Franks called annual meetings every Marchfeld (1 March), when the king and his nobles assembled in large open fields and determined their targets for the next campaigning season. The meetings were a show of strength on behalf of the monarch and a way for him to retain loyalty among his troops. In their civil wars, the Merovingian kings concentrated on the holding of fortified places and the use of siege engines. In wars waged against external foes, the objective was typically the acquisition of booty or the enforcement of tribute. Only in the lands beyond the Rhine did the Merovingians seek to extend political control over their neighbours.

Tactically, the Merovingians borrowed heavily from the Romans, especially regarding siege warfare. Their battle tactics were highly flexible and were designed to meet the specific circumstances of a battle. The tactic of subterfuge was employed endlessly. Cavalry formed a large segment of an army, but troops readily dismounted to fight on foot. The Merovingians were capable of raising naval forces: the naval campaign waged against the Danes by Theuderic I in 515 involved ocean-worthy ships, and rivercraft were used on the Loire, Rhône and Rhine.
Culture

In a modern linguistic context, the Germanic language of the early Franks is variously called "Old Frankish" or "Old Franconian", and these terms refer to the language of the Franks prior to the advent of the High German consonant shift, which took place in the 7th century. After this consonant shift the Frankish dialects diverged: the dialects which would become modern Dutch did not undergo the shift, while all others did so to varying degrees, creating the so-called Rhenish fan pattern. As a result, the distinction between Old Dutch and Old Frankish is largely negligible, and "Old Dutch" (also called "Old Low Franconian") is the term used to distinguish the variants which were not affected by this Second Germanic consonant shift.

The early Frankish language has not been directly attested, apart from a very small number of runic inscriptions found within contemporary Frankish territory, such as the Bergakker inscription. Nevertheless, a significant amount of Frankish vocabulary has been reconstructed by examining early Germanic loanwords found in Old French, as well as through comparative reconstruction via Dutch. The influence of Old Frankish on contemporary Gallo-Roman vocabulary and phonology has long been the subject of scholarly debate. Frankish influence is thought to include the designations of the four cardinal directions: nord "north", sud "south", est "east" and ouest "west", and at least an additional 1000 stem words.

Although the Franks would eventually conquer all of Gaul, speakers of Frankish apparently expanded in sufficient numbers only into northern Gaul to have a linguistic effect. For several centuries, northern Gaul was a bilingual territory (Vulgar Latin and Frankish). The language used in writing, in government and by the Church was Latin. Urban T. Holmes has proposed that a Germanic language continued to be spoken as a second tongue by public officials in western Austrasia and northern Neustria as late as the 850s, and that it completely disappeared as a spoken language during the 10th century from regions where only French is spoken today.

The Germanic tribes who were called Franks in Late Antiquity are associated with the Weser–Rhine Germanic/Istvaeonic cultural-linguistic grouping. Early Frankish art and architecture belongs to a phase known as Migration Period art, which has left very few remains. The later period is called Carolingian art or, especially in architecture, pre-Romanesque. Very little Merovingian architecture has been preserved. The earliest churches seem to have been timber-built, with larger examples being of a basilica type. The most complete surviving example, a baptistery in Poitiers, is a building with three apses in a Gallo-Roman style. A number of small baptisteries can be seen in southern France: as these fell out of fashion, they were not updated and have subsequently survived as they were. Jewelry (such as brooches), weapons (including swords with decorative hilts) and clothing (such as capes and sandals) have been found in a number of grave sites. The grave of Queen Aregund, discovered in 1959, and the Treasure of Gourdon, which was deposited soon after 524, are notable examples. The few Merovingian illuminated manuscripts that have survived, such as the Gelasian Sacramentary, contain many zoomorphic representations.
Such Frankish objects show a greater use of the style and motifs of Late Antiquity and a lesser degree of skill and sophistication in design and manufacture than comparable works from the British Isles. So little has survived, however, that the best quality of work from this period may not be represented. The objects produced by the main centres of the Carolingian Renaissance, which represent a transformation from that of the earlier period, have survived in far greater quantity. The arts were lavishly funded and encouraged by Charlemagne, using imported artists where necessary, and Carolingian developments were decisive for the future course of Western art. Carolingian illuminated manuscripts and ivory plaques, which have survived in reasonable numbers, approached those of Constantinople in quality. The main surviving monument of Carolingian architecture is the Palatine Chapel in Aachen, an impressive and confident adaptation of San Vitale, Ravenna, from where some of the pillars were brought. Many other important buildings existed, such as the monasteries of Centula or St Gall, or the old Cologne Cathedral, since rebuilt. These large structures and complexes made frequent use of towers. Religion A sizeable portion of the Frankish aristocracy quickly followed Clovis in converting to Christianity (the Frankish church of the Merovingians). The conversion of all under Frankish rule required a considerable amount of time and effort. Echoes of Frankish paganism can be found in the primary sources, but their meaning is not always clear. Interpretations by modern scholars differ greatly, but it is likely that Frankish paganism shared most of the characteristics of other varieties of Germanic paganism. The mythology of the Franks was probably a form of Germanic polytheism. It was highly ritualistic. Many daily activities centred on the multiple deities, chief of which may have been the Quinotaur, a water-god from whom the Merovingians were reputed to have derived their ancestry. Most of their gods were linked with local cult centres, and their sacred character and power were associated with specific regions, outside of which they were neither worshipped nor feared. Most of the gods were "worldly", possessing form and having connections with specific objects, in contrast to the God of Christianity. Frankish paganism has been observed in the burial site of Childeric I, where the king's body was found covered in a cloth decorated with numerous bees. The bees are likely connected with the traditional Frankish weapon, the angon (meaning "sting"), named for its distinctive spearhead. It is possible that the fleur-de-lis is derived from the angon. Some Franks, like the 4th-century usurper Silvanus, converted to Christianity early. In 496, Clovis I, who had married a Burgundian Catholic named Clotilda in 493, was baptised by Saint Remi after a decisive victory over the Alemanni at the Battle of Tolbiac. According to Gregory of Tours, over three thousand of his soldiers were baptised with him. Clovis' conversion had a profound effect on the course of European history, for at the time the Franks were the only major Christianised Germanic tribe without a predominantly Arian aristocracy, and this led to a naturally amicable relationship between the Catholic Church and the increasingly powerful Franks.
Although many of the Frankish aristocracy quickly followed Clovis in converting to Christianity, the conversion of all his subjects was achieved only after considerable effort and, in some regions, a period of over two centuries. The Chronicle of St. Denis relates that, following Clovis' conversion, a number of pagans who were unhappy with this turn of events rallied around Ragnachar, who had played an important role in Clovis' initial rise to power. Although the text remains unclear as to the precise pretext, Clovis had Ragnachar executed. Remaining pockets of resistance were overcome region by region, primarily due to the work of an expanding network of monasteries. The Merovingian Church was shaped by both internal and external forces. It had to come to terms with an established Gallo-Roman hierarchy that resisted changes to its culture, Christianise pagan sensibilities and suppress their expression, provide a new theological basis for Merovingian forms of kingship deeply rooted in pagan Germanic tradition, and accommodate Irish and Anglo-Saxon missionary activities and papal requirements. The Carolingian reformation of monasticism and church-state relations was the culmination of the Frankish Church. The increasingly wealthy Merovingian elite endowed many monasteries, including that of the Irish missionary Columbanus. The 5th, 6th and 7th centuries saw two major waves of hermitism in the Frankish world, which led to legislation requiring that all monks and hermits follow the Rule of St Benedict. The Church sometimes had an uneasy relationship with the Merovingian kings, whose claim to rule depended on a mystique of royal descent and who tended to revert to the polygamy of their pagan ancestors. Rome encouraged the Franks to slowly replace the Gallican Rite with the Roman rite. Laws As with other Germanic peoples, the laws of the Franks were memorised by "rachimburgs", who were analogous to the lawspeakers of Scandinavia. By the 6th century, when these laws first appeared in written form, two basic legal subdivisions existed: Salian Franks were subject to Salic law and Ripuarian Franks to Ripuarian law. The Salic legal code applied in the Neustrian area from the river Liger (Loire) to the Silva Carbonaria, a forest south of present-day Brussels. The Silva Carbonaria represented the boundary of the original area of Frankish settlement, which Chlodio pushed past in the 5th century. The Ripuarian law was apparently used on the other side of the Silva Carbonaria, in the older Frankish kingdoms. The Rhineland Franks, who lived near the stretch of the Rhine from roughly Mainz to Duisburg, the region of the city of Cologne, are often considered separately from the Salians and are sometimes referred to in modern texts as Ripuarian Franks. The Ravenna Cosmography suggests that Francia Renensis included the old civitas of the Ubii, in Germania II (Germania Inferior), but also the northern part of Germania I (Germania Superior), including Mainz. Like the Salians, they appear in Roman records both as raiders and as contributors to military units. Unlike the Salii, there is no record of when, if ever, the empire officially accepted their residence within its borders. They eventually succeeded in holding the city of Cologne, and at some point seem to have acquired the name Ripuarians, which may have meant "river people". In any case, a Merovingian legal code was called the Lex Ribuaria, but it probably applied in all the older Frankish lands, including the original Salian areas.
Gallo-Romans south of the River Loire and the clergy remained subject to traditional Roman law. Germanic law was overwhelmingly concerned with the protection of individuals and less concerned with protecting the interests of the state. According to Michel Rouche, "Frankish judges devoted as much care to a case involving the theft of a dog as Roman judges did to cases involving the fiscal responsibility of curiales, or municipal councilors".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:CS1_errors:_URL] | [TOKENS: 589] |
Category:CS1 errors: URL This is a tracking category for CS1 citations that have URI scheme and other URL errors. External links in Citation Style 1 and Citation Style 2 templates are made from two parts: the title (|title=, |chapter=, etc.) and the URL (|url=, |archive-url=, |chapter-url=, etc.). The |url= parameter and other URL parameters must begin with a supported URI scheme. The URI schemes http://, https://, and the protocol-relative scheme // are most commonly used; irc://, ircs://, ftp://, news:, mailto: and gopher:// are also supported. The URL scheme and host are checked to ensure that they contain only Latin characters and certain (required) punctuation, and that they do not contain spaces. The URL may be protocol-relative (beginning with //). If there are no spaces and the URL is not protocol-relative, then the scheme must comply with RFC 3986. Some URL domains are written with non-Latin characters; cs1|2 does not accept those kinds of URLs, so they must be 'internationalized'. Online tools are available to internationalize URLs that are written in non-Latin scripts. Top- and second-level domain names are checked for proper form. Generally, top-level domain names must be two or more letters; second-level domain names must be two or more letters, digits, or hyphens (the first and last character must be a letter or digit). Single-letter second-level domains are supported only in a few specific cases. Third- and subsequent-level domain names are not checked, and the path portion of the URL is not checked. There is an additional test for |archive-url=. The cs1|2 templates expect that |archive-url= will hold a unique URL for an archived snapshot of the source identified by |url= or |chapter-url= (or any of its aliases). This error message is emitted when the value assigned to |archive-url= is the same as the matching title or chapter URL. To resolve this error, ensure that |archive-url= holds the URL of the archived snapshot and differs from the matching |url= or |chapter-url=. Pages with this error are automatically placed in Category:CS1 errors: URL. Wikipedia Library link in a URL parameter This error is reported when a URL-holding parameter has a URL that links to The Wikipedia Library. When these sorts of URLs are encountered, Module:Citation/CS1 emits this error message and automatically sets |url-access=subscription, because these URLs are not accessible by readers. To resolve this error, make sure that the value assigned to the URL parameter is not the Wikipedia Library URL but is the URL of the source. Pages with this error are automatically placed in Category:CS1 errors: URL.
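The scheme and host checks described above are straightforward to illustrate. The following Python function is a rough, unofficial approximation of those rules; the real checks live in Wikipedia's Module:Citation/CS1, which is written in Lua, so the function name, the regex, and the exact rule set here are illustrative assumptions rather than the module's actual code.

```python
import re

# Schemes listed above as supported by cs1|2 URL parameters.
SUPPORTED_SCHEMES = ("http://", "https://", "irc://", "ircs://",
                     "ftp://", "news:", "mailto:", "gopher://")

def looks_like_valid_cs1_url(url: str) -> bool:
    """Rough approximation of the checks described above.

    NOT the actual Module:Citation/CS1 logic (the real module is Lua);
    this only sketches the documented rules: no spaces, a supported
    scheme or a protocol-relative prefix, and a host made of Latin
    letters, digits and hyphens with a TLD of two or more letters.
    Scheme-specific forms such as mailto: addresses are not modelled.
    """
    if " " in url:                        # URLs may not contain spaces
        return False
    if url.startswith("//"):              # protocol-relative URLs allowed
        rest = url[2:]
    else:
        for scheme in SUPPORTED_SCHEMES:
            if url.startswith(scheme):
                rest = url[len(scheme):]
                break
        else:
            return False                  # unsupported URI scheme
    host = re.split(r"[/?#]", rest, maxsplit=1)[0]
    return re.fullmatch(
        r"[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?"       # first label
        r"(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*"  # more labels
        r"\.[A-Za-z]{2,}",                              # TLD: 2+ letters
        host) is not None

# looks_like_valid_cs1_url("https://example.org/page")  -> True
# looks_like_valid_cs1_url("example.org/page")          -> False (no scheme)
# looks_like_valid_cs1_url("https://exa mple.org")      -> False (space)
```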
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-130] | [TOKENS: 6011] |
Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, the arts, literature, heraldry, politics and sports.
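The 7.77 million estimate mentioned above was derived by extrapolating regularities in the taxonomic hierarchy: counts of described taxa grow roughly geometrically from phylum down to genus, so a fit over the well-censused higher ranks can be extended one rank further to predict the species-level total. A minimal sketch of that idea follows, with invented counts purely for illustration; the real 2011 analysis by Mora and colleagues fitted per-rank discovery curves, and nothing here reproduces their data.

```python
import numpy as np

# Illustrative sketch of the higher-taxon extrapolation behind the
# ~7.77 million species estimate. The counts below are INVENTED for
# illustration, not the data used by Mora et al. (2011).
ranks  = np.array([1, 2, 3, 4, 5, 6])            # phylum ... genus
counts = np.array([35, 110, 500, 6_000, 110_000, 1_200_000])

# Log-linear fit across ranks, then extrapolate to rank 7 (species).
slope, intercept = np.polyfit(ranks, np.log(counts), 1)
species_estimate = np.exp(intercept + slope * 7)
print(f"extrapolated species count: {species_estimate:,.0f}")
# -> on the order of millions, the same kind of figure as the estimate
```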
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things: they are eukaryotic, multicellular and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals also have structural characteristics that set them apart from all other living things. Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. Estimated numbers of described extant species for the major animal phyla, together with their principal habitats (terrestrial, fresh water and marine) and free-living or parasitic ways of life, are tabulated in the source article; the table itself is not preserved in this extract. Species estimates are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species, including those not yet described, was calculated to be about 7.77 million in 2011. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their animal nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny (shown in their cladogram, with uncertain relationships indicated by dashed lines) whose successively more distant outgroups to the animals are the Filasterea, Pluriformea, Ichthyosporea and Holomycota (including fungi). The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, the Placozoa have no symmetry, and they were often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, supporting the sponge-sister view, in which the clades branch in the order Porifera, Ctenophora, Placozoa, Cnidaria, Bilateria (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, branching in the order Ctenophora, Porifera, Placozoa, Cnidaria, Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined and under active research.
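The two competing topologies can be stated compactly in Newick notation, the plain-text tree format used by most phylogenetics software. A minimal illustrative sketch follows; the tree strings are transcribed from the branching orders summarised above, and the helper function is mine, not from any phylogenetics library.

```python
# The two competing hypotheses for the base of the animal tree,
# written as Newick strings (branching orders as summarised above).
SPONGE_SISTER = "(Porifera,(Ctenophora,(Placozoa,(Cnidaria,Bilateria))));"
CTENO_SISTER  = "(Ctenophora,(Porifera,(Placozoa,(Cnidaria,Bilateria))));"

def leaves(newick: str) -> list[str]:
    """Tiny illustrative helper: list the taxon names in a Newick string."""
    flattened = newick.replace("(", ",").replace(")", ",").rstrip(";")
    return [taxon for taxon in flattened.split(",") if taxon]

# Both trees contain the same five clades; only the topology differs,
# i.e. whether sponges or comb jellies branch off first.
assert set(leaves(SPONGE_SISTER)) == set(leaves(CTENO_SISTER))
```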
The remaining animals, the great majority, comprising some 29 phyla and over a million species, form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny, the major bilaterian lineages are the Xenacoelomorpha and, within the Nephrozoa, the deuterostome Ambulacraria and Chordata and the protostome Ecdysozoa and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess') and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both from domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects (principally bees and silkworms) and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccination was developed in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation it is likewise a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nelly_Longarms] | [TOKENS: 217] |
Nelly Longarms Nelly Longarms (or Nellie Longarms) is a hag and water spirit in English folklore who dwells at the bottom of deep ponds, rivers and wells. Like the Grindylow, Peg Powler and Jenny Greenteeth, she will reach out with her long sinewy arms and drag children beneath the water if they get too close. She is regarded as a bogeyman figure who is invoked by parents to frighten children into appropriate behaviour. The legend finds its origins around St Margaret's Garth, Durham, England, where residents have reported sightings and strange sounds, especially at night, since the early 18th century. Nelly Longarms must typically be invited into a property before she can drag children into the water, and most sightings of the spirit are at the threshold of properties, where she is often heard slamming or opening doors.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mishnah_Berurah] | [TOKENS: 903] |
Mishnah Berurah The Mishnah Berurah (Hebrew: משנה ברורה, "Clear Teaching") is a work of halakha (Jewish law) by Rabbi Yisrael Meir Kagan (Poland, 1838–1933, also known as the Chofetz Chaim). It is a commentary on Orach Chayim, the first section of the Shulchan Aruch, which deals with the laws of prayer, synagogue, Shabbat and holidays, summarizing the opinions of the Acharonim (post-Medieval rabbinic authorities) on that work. The title comes from Talmud Bavli Masechet Shabbat 138b–139a: "They will rove, seeking the word of the LORD, but they will not find it (Amos 8:12) – they will not find clear teaching and clear law in one place." Contents The Mishnah Berurah is traditionally printed in six volumes alongside selected other commentaries. The work provides simple and contemporary explanatory remarks and citations to daily aspects of halakha. It is widely used as a reference and has mostly supplanted the Chayei Adam and the Aruch HaShulchan as the primary authority on Jewish daily living among Ashkenazi Jews, particularly those closely associated with haredi yeshivas. The Mishnah Berurah is accompanied by additional in-depth glosses called Be'ur Halakha and a reference section called Sha'ar Hatziyun (both also written by the Chofetz Chaim), and by additional commentaries called Be'er Hagolah, Be'er Heitev, and Sha'arei Teshuvah. The Mishnah Berurah's "literary style can be described as follows: In relation to a given law of the Shulhan Aruch, he raises a particular case with certain peculiarities that may change the law; then, he enumerates the opinions of the Ahronim (the later authorities, of the 16th century and on) on that case, from the most lenient to the most stringent; and finally, he decides between them.... Having displayed what we may call the "leniency-stringency spectrum", [he] actually offers the reader an array of conduct options from which he may pick the one that seems right for him. This choice is not altogether free, since [he] shows a clear inclination to one side of the spectrum - the stringent - and encourages the reader to follow it, but still, the soft language of the ruling suggests that if one follows the other side of the spectrum, the lenient, he will not sin, since there are trustworthy authorities that may back his choice." Not all of the Mishnah Berurah was written by Kagan: some parts were instead written by his son or various students, which accounts for the existence of several contradictions between different rulings in the text. Impact The Mishnah Berurah has come to play a significant role in the study and practice of contemporary Ashkenazi Orthodox Jews. According to some, it is the "posek acharon" whose rulings are the last word on the halachic issues it addresses. As such, the "yeshivish" community tends to follow its rulings almost exclusively. However, R' Yosef Eliyahu Henkin ruled that the Aruch haShulchan should be regarded as more authoritative for a number of reasons: it is the later of the two codes; it covers the entire Shulchan Aruch and not just Orach Chaim; it takes Jewish custom into account; and it was written by a practicing rabbi who thus had more experience with halachic dilemmas. R' Moshe Feinstein also preferred the Aruch Hashulchan, for the last of these reasons. Indeed, on a number of key issues, common Orthodox practice does not follow the Mishnah Berurah's stringencies. "Mishnah Berurah Yomit" is a daily study programme initiated by Vaad Daas Halacha and the Chofetz Chaim Heritage Foundation.
The study programme proceeds either on a 2½-year cycle ("Daf a Day") or a 5-year cycle ("Amud a Day") and includes a focus on each Yom Tov (festival) in the 30 preceding days.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-NASA-20141014-NJ-140] | [TOKENS: 11899] |
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and in the atmosphere; under the low Martian gravity it is picked up and spread even by the weak winds of the tenuous atmosphere. The terrain of Mars roughly follows a north–south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains and the southern hemisphere of cratered highlands. Geologically the planet is fairly active, with marsquakes trembling underneath the ground; it also hosts many enormous extinct volcanoes (the tallest, Olympus Mons, is 21.9 km or 13.6 mi tall) and one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days); a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, remains the dominant influence on the planet's geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon 20 times more massive than Phobos orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but three primary periods stand out: the Noachian, the Hesperian and the Amazonian, outlined above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
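The volume, mass and gravity figures above are mutually consistent: surface gravity scales as mass over radius squared, and the radius ratio follows from the cube root of the volume ratio. A quick back-of-the-envelope check in Python (the 0.15 and 0.11 ratios come from the paragraph above; the variable names are mine):

```python
# Consistency check: surface gravity g ~ M / R^2, with R from volume.
volume_ratio = 0.15          # Mars volume / Earth volume (from the text)
mass_ratio   = 0.11          # Mars mass / Earth mass (from the text)

radius_ratio  = volume_ratio ** (1 / 3)          # ~0.53
gravity_ratio = mass_ratio / radius_ratio ** 2   # ~0.39

print(f"predicted surface gravity: {gravity_ratio:.0%} of Earth's")
# -> roughly 39%, matching the ~38% quoted above
```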
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris, preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core with a radius of 613 ± 67 kilometres (381 ± 42 mi). Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or to silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low-albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars, and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or the passage of dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the radiation of 1.84 millisieverts per day, or 22 millirads per day, during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, Mars's higher concentration of atmospheric CO2 and lower surface pressure may explain why sound is attenuated more strongly there; natural sound sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Like Earth, Mars has seasons, which alternate between its northern and southern hemispheres. Additionally, the orbit of Mars has a larger eccentricity than Earth's; the planet reaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with winds reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also deposit coverings of dry ice (carbon dioxide frost) on the polar ice caps. Hydrology Mars contains water in large amounts, but most of it is dust-covered water ice at the Martian polar ice caps.
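The ~10.8 km scale height quoted above can be roughly reproduced from the isothermal scale-height formula H = RT/(Mg). A minimal Python sketch follows; the mean atmospheric temperature of about 210 K is an assumed round figure for illustration (it is not stated in the text), while the CO2 molar mass and Martian surface gravity are standard values:

```python
# Rough reconstruction of the Martian atmospheric scale height, H = R*T/(M*g).
# Assumption (not from the text): mean atmospheric temperature ~210 K.
R = 8.314     # universal gas constant, J/(mol*K)
T = 210.0     # assumed mean atmospheric temperature, K
M = 0.04401   # molar mass of CO2 (the dominant gas), kg/mol
g = 3.71      # Martian surface gravity, m/s^2 (about 38% of Earth's 9.81)

H = R * T / (M * g)                         # isothermal scale height in metres
print(f"Scale height ~ {H / 1000:.1f} km")  # ~10.7 km, close to the quoted 10.8 km
```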
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
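The south-polar figure quoted above implies a definite ice volume: an 11-metre global layer multiplied by the surface area of Mars. A minimal Python sketch of that implication follows; the 3,389.5 km mean radius of Mars is a standard value supplied here for illustration (it is not stated in the text), and the calculation deliberately ignores the ice/liquid density difference and the "most of the surface" qualifier:

```python
import math

# Ice volume implied by melting the south polar cap into an ~11 m global layer.
MARS_MEAN_RADIUS_KM = 3389.5   # standard value, not stated in the text
LAYER_DEPTH_KM = 0.011         # 11 metres, from the text

surface_area_km2 = 4 * math.pi * MARS_MEAN_RADIUS_KM ** 2   # ~1.44e8 km^2
implied_volume_km3 = surface_area_km2 * LAYER_DEPTH_KM      # ~1.6e6 km^3
print(f"Implied ice volume ~ {implied_volume_km3:,.0f} km^3")
```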
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
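The deuterium enrichment quoted above follows directly from the two ratios: dividing the Martian value by the terrestrial one gives roughly a factor of six, and propagating the ±1.7 uncertainty spans the stated five-to-seven range. A minimal Python sketch of this arithmetic, using only the figures given in the text:

```python
# Deuterium enrichment of the Martian atmosphere relative to Earth,
# using the D/H ratios quoted in the text.
mars_dh, mars_dh_err = 9.3e-4, 1.7e-4
earth_dh = 1.56e-4

best = mars_dh / earth_dh                 # ~6.0
low = (mars_dh - mars_dh_err) / earth_dh  # ~4.9
high = (mars_dh + mars_dh_err) / earth_dh # ~7.1
print(f"Enrichment ~ {best:.1f}x (range {low:.1f}-{high:.1f}x)")
```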
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet for transfers from Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth, around opposition, once every synodic period of 779.94 days. Opposition should not be confused with a Mars conjunction, when Earth and Mars are on opposite sides of the Solar System, forming a straight line through the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet.
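The 779.94-day synodic period quoted above follows from the two orbital periods via 1/S = 1/P_Earth − 1/P_Mars. A minimal Python check follows; the 687-day Martian year is given in the text, while 365.25 days for Earth is the usual round value supplied here for illustration:

```python
# Synodic period of Mars as seen from Earth: 1/S = 1/P_earth - 1/P_mars.
P_EARTH_DAYS = 365.25   # Earth's orbital period (standard round value)
P_MARS_DAYS = 687.0     # Mars's orbital period, from the text

synodic_days = 1 / (1 / P_EARTH_DAYS - 1 / P_MARS_DAYS)
print(f"Synodic period ~ {synodic_days:.1f} days")             # ~779.9 days
print(f"... or about {synodic_days / P_EARTH_DAYS:.1f} years") # ~2.1 years
```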
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from those of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study by a team of researchers from multiple countries suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past; analysis of rocks that record tidal processes on the planet indicates that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις; the common Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a 10-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous concepts of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Until 1997, and after Viking 1 shut down in 1982, Mars was visited by only three unsuccessful probes: two flying past without contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioning in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), started an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Those in orbit include 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from past missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor shielding against bombardment by the solar wind due to the absence of a magnetosphere, and it has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars and could likewise have preserved signs of life there, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the find permits no definitive determination of a biological or abiotic origin with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans for a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and in-situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Local Volume → Virgo Supercluster → Laniakea Supercluster → Pisces–Cetus Supercluster Complex → Local Hole → Observable universe → UniverseEach arrow (→) may be read as "within" or "part of". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Crusader_states] | [TOKENS: 22680] |
Contents Crusader states The Crusader states, or Outremer, were four Catholic polities established in the Levant region and southeastern Anatolia from 1098 to 1291. Following the principles of feudalism, the foundation for these polities was laid by the First Crusade, which was proclaimed by the Latin Church in 1095 to reclaim the Holy Land after it was lost to the 7th-century Muslim conquest. From north to south, they were: the County of Edessa (1098–1150), the Principality of Antioch (1098–1268), the County of Tripoli (1102–1289), and the Kingdom of Jerusalem (1099–1291). The three northern states covered an area in what is now southeastern Turkey, northwestern Syria, and northern Lebanon; the Kingdom of Jerusalem, the southernmost and most prominent state, covered an area in what is now Israel, Palestine, southern Lebanon, and western Jordan. The description "Crusader states" can be misleading, as from 1130 onwards, very few people among the Franks were Crusaders. Medieval and modern writers use the term "Outremer" as a synonym, derived from the French word for overseas. By 1098, the Crusaders' armed pilgrimage to Jerusalem was passing through the Syria region. Edessa, under the rule of Greek Orthodoxy, was subject to a coup d’état in which the leadership was taken over by Baldwin of Boulogne, and Bohemond of Taranto remained as the ruling prince in the captured city of Antioch. The siege of Jerusalem in 1099 resulted in a decisive Crusader victory over the Fatimid Caliphate, after which territorial consolidation followed, including the taking of Tripoli. In 1144, Edessa fell to the Zengid Turks, but the other three realms endured until the final years of the 13th century, when they fell to the Mamluk Sultanate of Egypt. The Mamluks captured Antioch in 1268 and Tripoli in 1289, leaving only the Kingdom of Jerusalem, which had been severely weakened by the Ayyubid Sultanate after the siege of Jerusalem in 1244. The Crusader presence in the Levant collapsed shortly thereafter, when the Mamluks captured Acre in 1291, ending the Kingdom of Jerusalem nearly 200 years after it was founded. With all four of the states defeated and annexed, the survivors fled to the Kingdom of Cyprus, which had been established by the Third Crusade. The study of the Crusader states in their own right, as opposed to being a sub-topic of the Crusades, began in 19th-century France as an analogy to the French colonial experience in the Levant, though this was rejected by 20th-century historians. Their consensus was that the Frankish population, as the Western Europeans were known at the time, lived as a minority society that was largely urban and isolated from the indigenous Levantine peoples, having separate legal and religious systems. The ancient Jewish communities that had survived and remained in the holy cities of Jerusalem, Tiberias, Hebron, and Safed since the Jewish–Roman wars and the destruction of the Second Temple were heavily persecuted in a pattern of rampant Christian antisemitism accompanying the Crusades. Outremer The terms Crusader states and Outremer (French: outre-mer, lit. 'overseas') describe the four feudal states established after the First Crusade in the Levant in around 1100: (from north to south) the County of Edessa, the Principality of Antioch, the County of Tripoli, and the Kingdom of Jerusalem. The term Outremer is of medieval origin, whilst modern historians use Crusader states and the term Franks for the European incomers.
However, relatively few of the incoming Europeans took a Crusader oath. The Latin chronicles of the First Crusade, written in the early 12th century, call the Western Christians who came from Europe Franci irrespective of their ethnicity. Byzantine Greek sources use Φράγκοι (Frangi) and Arabic sources الإفرنجي (al-Ifranji). Alternatively, the chronicles use Latini, or Latins. These medieval ethnonyms reflect that the settlers could be differentiated from the indigenous population by language and faith. The Franks were mainly French-speaking Roman Catholics, while the natives were mostly Arabic- or Greek-speaking Muslims, Christians of other denominations, and Jews. The Kingdom of Jerusalem extended over historical Palestine and at its greatest extent included some territory east of the Jordan River. The northern states covered what is now part of Syria, south-eastern Turkey, and Lebanon. These areas were historically called Syria (known to the Arabs as al-Sham) and Upper Mesopotamia. Edessa extended east beyond the Euphrates. In the Middle Ages the Crusader states were also called Syria or Syrie. From around 1115, the ruler of Jerusalem was styled 'king of the Latins in Jerusalem'. Historian Hans Eberhard Mayer believes this reflected that only Latins held complete political and legal rights in the kingdom, and that the major division in the society was not between the nobility and the common people but between the Franks and the indigenous peoples. Despite sometimes receiving homage from, and acting as regent for, the rulers of the other states, the king held no formalised overlord status, and those states remained legally outside the kingdom. Jews, Christians, and Muslims respected Palestine, known as the Holy Land, as an exceptionally sacred place. They all associated the region with the lives of the prophets of the Hebrew Bible. All the holy sites in Judaism were found there, including the remains of the Second Temple, destroyed by the Romans in 70 AD. The New Testament presents Palestine as the venue of the acts of Jesus and his Apostles. Islamic tradition described the region's principal city, Jerusalem, as the site of the Isra' and Mi'raj, Muhammad's miraculous night travel and ascension to Heaven. Places associated with holy people developed into shrines visited by pilgrims coming from faraway lands, often as an act of penance. The surge in Christian pilgrimage also inspired many Jews to return to the Holy Land. The Church of the Holy Sepulchre was built to commemorate Christ's crucifixion and resurrection in Jerusalem. The Church of the Nativity was thought to enclose his birthplace in Bethlehem. The Dome of the Rock and Al-Aqsa Mosque commemorated Muhammad's night journey. Although the most sacred places of devotion were in Palestine, there were also shrines in neighbouring Syria. As a borderland of the Muslim world, Syria was an important theatre of jihad, though enthusiasm for pursuing it had faded by the end of the 11th century. In contrast, the Catholic ideology of religious war quickly developed, culminating in the idea of crusades for lands claimed for Christianity. Background Most Crusaders came from what had been the Carolingian Empire around 800. The empire had disintegrated, and two loosely unified successor states had taken its place: the Holy Roman Empire—which encompassed Germany, part of northern Italy, and the neighbouring lands—and France. Germany was divided into duchies, such as Lower Lorraine and Saxony, and their dukes did not always obey the emperors.
Northern Italy was even less united, divided into numerous de facto independent states, and the authority of the emperor was barely felt. The Carolingians' western successor state, France, was not united either; the French kings only controlled a small central region directly. Counts and dukes ruled other regions, and some of them were remarkably wealthy and powerful—in particular, the dukes of Aquitaine and Normandy, and the counts of Anjou, Champagne, Flanders, and Toulouse. Western Christians and Muslims interacted mainly through warring or commerce. During the 8th and 9th centuries, the Muslims were on the offensive, and commercial contacts primarily enriched the Islamic world. Europe was rural and underdeveloped, offering little more than raw materials and slaves in return for spices, cloth, and other luxuries from the Middle East. Climate change during the Medieval Warm Period affected the Middle East and western Europe differently. In the east, it caused droughts, while in the west, it improved conditions for agriculture. Higher agricultural yields led to population growth and the expansion of commerce, and to the development of prosperous military and mercantile elites. In Catholic Europe, state and society were organised along feudal lines. Landed estates were customarily granted in fief—that is, in return for services that the grantee, or vassal, was to perform for the grantor, or lord. A vassal owed fealty to the lord and was expected to provide military aid and advice to him. Violence was endemic, and a class of mounted warriors, the knights, emerged. Many built castles, and their feuds brought much suffering to the unarmed population. The development of the knightly class coincided with the subjection of the formerly free peasantry into serfdom, but the connection between the two processes is unclear. As feudal lordships could be established by acquiring land, western aristocrats willingly launched offensive military campaigns, even against faraway territories. Catholic Europe's expansion in the Mediterranean began in the second half of the 11th century. Norman warlords conquered southern Italy from the Byzantines and ousted the Muslim rulers from Sicily; French aristocrats hastened to the Iberian Peninsula to fight the Moors of Al-Andalus; and Italian fleets launched pillaging raids against the North African ports. This shift of power especially benefited merchants from the Italian city-states of Amalfi, Genoa, Pisa, and Venice. They replaced the Muslim and Jewish middlemen in the lucrative trans-Mediterranean commerce, and their fleets became the dominant naval forces in the region. On the eve of the Crusades, after a thousand years of a reputedly uninterrupted succession of popes, the papacy was Catholic Europe's oldest institution. The popes were seen as Saint Peter's successors, and their prestige was high. In the west, the Gregorian Reform reduced lay influence on church life and strengthened papal authority over the clergy. Eastern Christians continued to consider the popes as no more than one of the five highest-ranking church leaders, titled patriarchs, and rejected the idea of papal supremacy. This opposition, along with differences in theology and liturgy, caused acrimonious disputes which escalated when a papal legate excommunicated the Ecumenical Patriarch of Constantinople in 1054.
The patriarchs of Alexandria, Antioch, and Jerusalem sided with the ecumenical patriarch against the papacy, but the East–West Schism was not yet inevitable, and the Catholic and Orthodox Churches remained in full communion. The Gregorian Reform enhanced the popes' influence on secular matters. To achieve political goals, popes excommunicated their opponents, placed entire realms under interdict, and promised spiritual rewards to those who took up arms for their cause. In 1074 Pope Gregory VII even considered leading a military campaign against the Turks who had attacked Byzantine territories in Anatolia. Turkic migration permeated the Middle East from the 9th century. Muslim border raiders captured unconverted Turkic nomads in the Central Asian borderlands and sold them to Islamic leaders who used them as slave soldiers. These were known as ghilman or mamluk and were emancipated when converted to Islam. Mamluks were valued primarily because the link of their prospects to a single master generated extreme loyalty. In the context of Middle Eastern politics this made them more trustworthy than relatives. Eventually, some mamluk descendants climbed the Muslim hierarchy to become king makers or even dynastic founders. In the mid-11th century, a minor clan of Oghuz Turks from Transoxiana, named Seljuks after the warlord Saljūq, had expanded through Khurasan and on to Baghdad. There, Saljūq's grandson Tughril I was granted the title sultan—'power' in Arabic—by the Abbasid caliph. The caliphs kept their legitimacy and prestige, but the sultans held political power. Seljuk success was achieved by extreme violence. It brought disruptive nomadism to the sedentary society of the Levant and set a pattern followed by other nomadic Turkic clans such as the Danishmendids and the Artuqids. The Seljuk Empire was decentralised, polyglot, and multi-national. A junior Seljuk ruling a province as an appanage was titled malik, Arabic for king. Mamluk military commanders acting as tutors and guardians for young Seljuk princes held the position of atabeg ('father-commander'). If his ward held a province in appanage, the atabeg ruled it as regent for the underage malik. On occasion, the atabeg kept power after his ward reached the age of majority or died. The Seljuks adopted and strengthened the traditional iqta' system of the administration of state revenues. This system secured the payment of military commanders through granting them the right to collect the land tax in a well-defined territory, but it exposed the taxpayers to an absent lord's greed and to his officials' arbitrary actions. Although the Seljuk state worked when family ties and personal loyalty overlapped the leaders' personal ambitions, the lavish iqta' grants combined with rivalries between maliks, atabegs, and military commanders could lead to disintegration in critical moments. The region's ethnic and religious diversity led to alienation among the ruled populations. In Syria, the Seljuk Sunnis ruled indigenous Shias. In Cilicia and northern Syria, the Byzantines, Arabs, and Turks squeezed populations of Armenians. The Seljuks contested control of southern Palestine with Egypt, where Shia rulers ruled a majority Sunni populace through powerful viziers who were mainly Turkic or Armenian, rather than Egyptian or Arab. The Seljuks and the Fatimid Caliphate of Egypt hated each other, as the Seljuks saw themselves as defenders of the Sunni Abbasid Caliphate, and Fatimid Egypt was the chief Shi'ite power in Islam.
The root of this hostility went beyond cultural and racial conflict; it originated in the splits within Islam following Muhammad's death. Sunnis supported a caliphal succession that began with one of his associates, Abu Bakr, while Shi'ites supported an alternative succession from his cousin and son-in-law, Ali. Islamic law granted the status of dhimmi, or protected peoples, to the People of the Book, like Christians and Jews. The dhimmi were second-class citizens, obliged to pay a special poll tax, the jizya, but they could practise their religion and maintain their own law courts. Theological, liturgical, and cultural differences had given rise to the development of competing Christian denominations in the Levant before the 7th-century Muslim conquest. The Greek Orthodox natives, or Melkites, remained in full communion with the Byzantine imperial church, and their religious leaders often came from the Byzantine capital, Constantinople. In the 5th century, the Nestorians and the Monophysite Jacobites, Armenians, and Copts broke with the Byzantine state church. The Maronites' separate church organisation emerged under Muslim rule. During the late 10th and early 11th centuries, the Byzantine Empire had been on the offensive, recapturing Antioch in 969 after three centuries of Arab rule and invading Syria. Turkic brigands and their Byzantine (often ethnically Turkic) counterparts called akritai indulged in cross-border raiding. In 1071, while securing his northern borders during a break in his campaigns against the Fatimid Caliphate, Sultan Alp Arslan defeated Byzantine Emperor Romanos IV Diogenes at the Battle of Manzikert. Romanos' capture and the Byzantine factionalism that followed broke Byzantine border control. This enabled large numbers of Turkic warbands and nomadic herders to enter Anatolia. Alp Arslan's cousin Suleiman ibn Qutulmish seized Cilicia and entered Antioch in 1084. Two years later, he was killed in a conflict with the Seljuk Empire. Between 1092 and 1094, Nizam al-Mulk, the Sultan Malik-Shah, the Fatimid Caliph Al-Mustansir Billah, and the vizier Badr al-Jamali all died. Malik-Shah's brother Tutush and the atabegs of Aleppo and Edessa were killed in the succession conflict, and Suleiman's son Kilij Arslan I revived his father's Sultanate of Rum in Anatolia. The Egyptian succession resulted in a split in the Ismā'īlist branch of Shia Islam. The Persian missionary Hassan-i Sabbah led a breakaway group, creating the Nizari branch of Isma'ilism. This was known as the New Preaching in Syria and as the Order of Assassins in western historiography. The order used targeted murder to compensate for its lack of military power. The Seljuk invasions, the subsequent eclipse of the Byzantines and Fatimids, and the disintegration of the Seljuk Empire revived the old Levantine system of city-states. The region had always been highly urbanised, and the local societies were organised into networks of interdependent settlements, each centred around a city or a major town. These networks developed into autonomous lordships under the rule of a Turkic, Arab or Armenian warlord or town magistrate in the late 11th century. The local qadis took control of Tyre and Tripoli, the Arab Banu Munqidh seized Shaizar, and Tutush's sons Duqaq and Ridwan succeeded in Damascus and Aleppo respectively, but their atabegs, Janah ad-Dawla and Toghtekin, were in control.
Ridwan's retainer Sokman ben Artuq held Jerusalem; Ridwan's father-in-law, Yağısıyan, ruled Antioch; and a warlord representing Byzantine interests, called Thoros, seized Edessa. During this period the old Islamic conflict between Sunni and Shia made the Muslim peoples more likely to wage war on each other than on Christians. History The Byzantines augmented their armies with Turkic and European mercenaries. This compensated for a shortfall caused by lost territory, especially in Anatolia. In 1095 at the Council of Piacenza, Emperor Alexios I Komnenos requested support from Pope Urban II against the Seljuk threat. What the emperor probably had in mind was a relatively modest force, and Urban far exceeded his expectations by calling for the First Crusade at the later Council of Clermont. He developed a doctrine of bellum sacrum (Christian holy war) and—based mainly on Old Testament passages in which God leads the Hebrews to victory in war—reconciled this with Church teachings. Urban's call for an armed pilgrimage for the liberation of the Eastern Christians and the recovery of the Holy Land aroused unprecedented enthusiasm in Catholic Europe. Within a year, tens of thousands of people, both commoners and aristocrats, departed for the military campaign. Individual Crusaders' motivations for joining the crusade varied, but some of them probably left Europe to make a permanent home in the Levant. Alexios cautiously welcomed the feudal armies commanded by western nobles. By dazzling them with wealth and charming them with flattery, Alexios extracted oaths of fealty from most of the Crusader commanders. As his vassals, Godfrey of Bouillon, nominally duke of Lower Lorraine; the Italo-Norman Bohemond of Taranto; Bohemond's nephew Tancred of Hauteville; and Godfrey's brother Baldwin of Boulogne all swore that any territory gained that the Roman Empire had previously held would be handed to Alexios' Byzantine representatives. Only Raymond IV, Count of Toulouse, refused this oath, instead promising non-aggression towards Alexios. The Byzantine general Tatikios guided the crusade on the arduous three-month march to besiege Antioch, during which the Franks made alliances with local Armenians. Before reaching Antioch, Baldwin and his men left the main army and headed to the Euphrates river, engaging in local politics and seizing the fortifications of Turbessel and Rawandan, where the Armenian populace welcomed him. Thoros, then ruler of this territory, could barely control or defend Edessa, so he tried to hire the Franks as mercenaries. Later, he went further and adopted Baldwin as his son in a power-sharing arrangement. In March 1098, a month after Baldwin's arrival, a Christian mob killed Thoros and acclaimed Baldwin as doux, the Byzantine title Thoros had used. Baldwin's position was personal rather than institutional, and the Armenian governance of the city remained in place. Baldwin's nascent County of Edessa consisted of pockets separated from his other holdings of Turbessel, Rawandan and Samosata by the territory of Turkic and Armenian warlords and the Euphrates. As the Crusaders marched towards Antioch, Syrian Muslims asked Sultan Barkiyaruq for help, but he was engaged in a power struggle with his brother Muhammad Tapar. At Antioch, Bohemond persuaded the other leaders that the city should be his if he could capture it and Alexios did not come to claim it. Alexios withdrew rather than join the siege after Stephen, Count of Blois (who was deserting) told him defeat was imminent.
In June 1098, Bohemond persuaded a renegade Armenian tower commander to let the crusaders into the city. They slaughtered the Muslim inhabitants and, by mistake, some local Christians. The crusade leaders had decided to return Antioch to Alexios as they had sworn at Constantinople, but when they learnt of Alexios' withdrawal, Bohemond claimed the city for himself. The other leaders agreed, apart from Raymond, who supported the Byzantine alliance. This dispute resulted in the march stalling in northern Syria. The Crusaders were becoming aware of the chaotic state of Muslim politics through frequent diplomatic contact with the Muslim powers. Raymond indulged in a small expedition. He bypassed Shaizar and laid siege to Arqa to enforce the payment of a tribute. In Raymond's absence, Bohemond expelled Raymond's last troops from Antioch and consolidated his rule in the developing Principality of Antioch. Under pressure from the poorer Franks, Godfrey and Robert II, Count of Flanders, reluctantly joined the unsuccessful siege of Arqa. Alexios asked the crusade to delay the march to Jerusalem so that the Byzantines could assist. Raymond's support for this strategy increased division among the crusade leaders and damaged his reputation among ordinary Crusaders. The Crusaders marched along the Mediterranean coast to Jerusalem. On 15 July 1099, the Crusaders took the city after a siege lasting barely longer than a month. Thousands of Muslims and Jews were killed, and the survivors were sold into slavery. Proposals to govern the city as an ecclesiastical state were rejected. Raymond refused the royal title, claiming only Christ could wear a crown in Jerusalem. This may have been to dissuade the more popular Godfrey from assuming the throne, but Godfrey adopted the title Advocatus Sancti Sepulchri ('Defender of the Holy Sepulchre') when he was proclaimed the first Frankish ruler of Jerusalem. In Western Europe an advocatus was a layman responsible for the protection and administration of church estates. The foundation of these three Crusader states did not change the political situation in the Levant profoundly. Frankish rulers replaced local warlords in the cities, but large-scale colonisation did not follow, and the conquerors did not change the traditional organisation of settlements and property in the countryside. The Muslim leaders were massacred or forced into exile, and the natives, accustomed to the rule of well-organised warbands, offered little resistance to their new lords. Western Christianity's canon law recognised that peace treaties and armistices between Christians and Muslims were valid. The Frankish knights regarded the Turkic mounted warlords as their peers with familiar moral values, and this familiarity facilitated their negotiations with the Muslim leaders. The conquest of a city was often accompanied by a treaty with the neighbouring Muslim rulers, who were customarily forced to pay tribute for the peace. The Crusader states had a special position in Western Christianity's consciousness: many Catholic aristocrats were ready to fight for the Holy Land, although in the decades following the destruction of the large Crusade of 1101 in Anatolia, only smaller groups of armed pilgrims departed for Outremer. The Fatimids' feud with the Seljuks hindered Muslim actions for more than a decade. Outnumbered by their enemies, the Franks remained in a vulnerable position, but they could forge temporary alliances with their Armenian, Arab, and Turkic neighbours.
Each Crusader state had its own strategic purpose during the first years of its existence. Jerusalem needed undisturbed access to the Mediterranean; Antioch wanted to seize Cilicia and the territory along the upper course of the Orontes River; and Edessa aspired to control the Upper Euphrates valley. The most powerful Syrian Muslim ruler, Toghtekin of Damascus, took a practical approach to dealing with the Franks. His treaties establishing Damascene–Jerusalemite condominiums (shared rule) in disputed territories created precedents for other Muslim leaders. In August 1099, Godfrey defeated the Fatimid vizier Al-Afdal Shahanshah at the Battle of Ascalon. When Daimbert of Pisa, the papal legate, arrived in the Levant with 120 ships, Godfrey gained much-needed naval support by backing him for the Patriarchate of Jerusalem, as well as granting him parts of Jerusalem and the Pisans a section of the port of Jaffa. Daimbert revived the idea of creating an ecclesiastical principality and extracted oaths of fealty from Godfrey and Bohemond. When Godfrey died in 1100, his retainers occupied the Tower of David to secure his inheritance for his brother Baldwin. Daimbert and Tancred sought Bohemond's help against the Lotharingians, but the Danishmends under Gazi Gümüshtigin captured Bohemond while he was securing Antioch's northern marches. Before departing for Jerusalem, Baldwin ceded Edessa to his cousin, Baldwin of Bourcq. Baldwin's arrival thwarted Daimbert, who nevertheless crowned him as Jerusalem's first Latin king on Christmas Day 1100. By performing the ceremony, the patriarch abandoned his claim to rule the Holy Land. Tancred remained defiant towards Baldwin until an Antiochene delegation offered him the regency in March 1101. He ceded his Principality of Galilee to the king but reserved the right to reclaim it as a fief if he returned from Antioch within 15 months. For the next two years, Tancred ruled Antioch, conquering Byzantine Cilicia and parts of Syria. The Fatimid Caliphate attacked Jerusalem in 1101, 1102 and 1105, on the last occasion in alliance with Toghtekin. Baldwin I repulsed these attacks and, with Genoese, Venetian, and Norwegian fleets, conquered all the towns on the Palestinian coast except Tyre and Ascalon. Raymond laid the foundations of the fourth Crusader state, the County of Tripoli. He captured Tartus and Gibelet and besieged Tripoli. His cousin William II Jordan continued the siege after Raymond's death in 1105. The siege was completed in 1109 when Raymond's son Bertrand arrived. Baldwin brokered a deal sharing the territory between them, until William Jordan's death united the county. Bertrand acknowledged Baldwin's suzerainty, although William Jordan had been Tancred's vassal. When Bohemond was released for a ransom in 1103, he compensated Tancred with lands and gifts. Baldwin of Bourcq and his cousin and vassal, Joscelin of Courtenay, were captured while campaigning with Bohemond at Harran. Tancred assumed the regency of Edessa. The Byzantines took the opportunity to reconquer Cilicia. They took the port but not the citadel of Laodikeia. Bohemond returned to Italy to recruit allies and gather supplies. Tancred assumed leadership in Antioch, and his cousin Richard of Salerno did the same in Edessa. In 1107, Bohemond crossed the Adriatic Sea and unsuccessfully besieged Dyrrachion in the Balkans. The resulting Treaty of Devol forced Bohemond to restore Laodikeia and Cilicia to Alexios, become his vassal and reinstate the Greek patriarch of Antioch. Bohemond never returned.
He died in 1111, leaving an underage son, Bohemond II. Tancred continued as regent of Antioch and ignored the treaty. Richard's son, Roger of Salerno, succeeded as regent on Tancred's death in 1112. The fall of Tripoli prompted Sultan Muhammad Tapar to appoint the atabeg of Mosul, Mawdud, to wage jihad against the Franks. Between 1110 and 1113, Mawdud mounted four campaigns in Mesopotamia and Syria, but rivalry among the commanders of his heterogeneous armies forced him to abandon the offensive on each occasion. As Edessa was Mosul's chief rival, Mawdud directed two campaigns against the city. They caused havoc, and the county's eastern region never recovered. The Syrian Muslim rulers saw the sultan's intervention as a threat to their autonomy and collaborated with the Franks. After an assassin, likely a Nizari, murdered Mawdud, Muhammad Tapar dispatched two armies to Syria, but both campaigns failed. As Aleppo remained vulnerable to Frankish attacks, the city leaders sought external protection. They allied with the adventurous Artuqid princes, Ilghazi and Balak, who inflicted crucial defeats on the Franks between 1119 and 1124 but could rarely prevent Frankish counter-invasions. In 1118 Baldwin of Bourcq succeeded Baldwin I as King of Jerusalem, naming Joscelin his successor in Edessa. After Roger was killed at Ager Sanguinis ('Field of Blood') in 1119, Baldwin II assumed the regency of Antioch for the absent Bohemond II. Public opinion regarded a series of disasters affecting Outremer (defeats by enemy forces and plagues of locusts) as punishment for the Franks' sins. To improve moral standards, the Jerusalemite ecclesiastical and secular leaders assembled a council at Nablus and issued decrees against adultery, sodomy, bigamy, and sexual relations between Catholics and Muslims. A proposal by a group of pious knights to establish a monastic order for deeply religious warriors was likely first discussed at the Council of Nablus. Church leaders quickly espoused the idea of armed monks, and within a decade two military orders, the Knights Templar and the Knights Hospitaller, had formed. As the Fatimid Caliphate no longer posed a major threat to Jerusalem, but Antioch and Edessa were vulnerable to invasion, the defence of the northern Crusader states took much of Baldwin II's time. His absence, its impact on government, and his placement of relatives and their vassals in positions of power created opposition in Jerusalem. Baldwin's sixteen-month captivity led to a failed deposition attempt by some of the nobility, with the Flemish count Charles the Good considered as a possible replacement. Charles declined the offer. Baldwin had four daughters. In 1126, Bohemond II reached the age of majority and married the second-oldest, Alice, in Antioch. Aleppo had plunged into anarchy, but Bohemond could not exploit this because of a conflict with Joscelin. The new atabeg of Mosul, Imad al-Din Zengi, seized Aleppo in 1128. The union of the two major Muslim centres was especially dangerous for neighbouring Edessa, but it also worried Damascus's new ruler, Taj al-Muluk Buri. Baldwin's eldest daughter Melisende was his heir. He married her to Fulk of Anjou, who had widespread western connections useful to the kingdom. After Fulk's arrival, Baldwin raised a large force for an attack on Damascus. This force included the leaders of the other Crusader states, and a significant Angevin contingent was provided by Fulk. The campaign was abandoned when the Franks' foraging parties were destroyed and bad weather made the roads impassable.
In 1130 Bohemond II was killed raiding in Cilicia, leaving Alice with their infant daughter, Constance. Baldwin II denied Alice control, instead resuming the regency himself until his death in 1131. On his deathbed Baldwin named Fulk, Melisende, and their infant son Baldwin III joint heirs. Fulk intended to revoke the arrangement, but his favouritism toward his compatriots roused strong discontent in the kingdom. In 1134, he repressed a revolt by Hugh II of Jaffa, a relative of Melisende, but was still compelled to accept the shared inheritance. He also thwarted frequent attempts by his sister-in-law Alice to assume the regency in Antioch, including through alliances with Pons of Tripoli and Joscelin II of Edessa. Taking advantage of Antioch's weakened position, Leo, a Cilician Armenian ruler, seized the Cilician plain. In 1133, the Antiochene nobility asked Fulk to propose a husband for Constance, and he selected Raymond of Poitiers, a younger son of William IX of Aquitaine. Raymond finally arrived in Antioch three years later and married Constance. He reconquered parts of Cilicia from the Armenians. In 1137, Pons was killed battling the Damascenes, and Zengi invaded Tripoli. Fulk intervened, but Zengi's troops captured Pons' successor Raymond II and besieged Fulk in the border castle of Montferrand. Fulk surrendered the castle and paid Zengi 50,000 dinars for his and Raymond's freedom. Emperor Alexios' son and successor, John II Komnenos, reasserted Byzantine claims to Cilicia and Antioch. His military campaign compelled Raymond of Poitiers to do homage and agree that he would surrender Antioch by way of compensation if the Byzantines ever captured Aleppo, Homs, and Shaizar for him. The following year the Byzantines and Franks jointly besieged Aleppo and Shaizar but could not take the towns. Zengi soon seized Homs from the Damascenes, but a Damascene–Jerusalemite coalition prevented his southward expansion. Joscelin made an alliance with the Artuqid Kara Arslan, who was Zengi's principal Muslim rival in Upper Mesopotamia. While Joscelin was staying west of the Euphrates at Turbessel, Zengi invaded the Frankish lands east of the river in late 1144. Before the end of the year, he captured the region, including the city of Edessa. The loss of Edessa threatened Antioch strategically and limited opportunities for Jerusalemite expansion in the south. In September 1146, Zengi was assassinated, possibly on orders from Damascus. His empire was divided between his two sons, with the younger, Nur ad-Din, succeeding him in Aleppo. A power vacuum in Edessa allowed Joscelin to return to the city, but he was unable to take the citadel. When Nur ad-Din arrived, the Franks were trapped; Joscelin fled, and the subsequent sack left the city deserted. The fall of Edessa shocked Western opinion, prompting the largest military response since the First Crusade. The new crusade consisted of two great armies led overland by Louis VII of France and Conrad III of Germany, arriving in Acre in 1148. The arduous march had greatly reduced the two rulers' forces. At a leadership conference, which included the widowed Melisende and her son Baldwin III, they agreed to attack Damascus rather than attempt to recover distant Edessa. The attack on Damascus ended in a humiliating defeat and retreat. Scapegoating followed the unexpected failure, with many westerners blaming the Franks. Fewer crusaders came from Europe to fight for the Holy Land in the following decades.
Raymond of Poitiers joined forces with the Nizari, and Joscelin with the Rum Seljuks, against Aleppo. Nur ad-Din invaded Antioch, and Raymond was defeated and killed at Inab in 1149. The next year Joscelin was captured and tortured; he later died in captivity. His wife, Beatrice of Saone, sold the remains of the County of Edessa to the Byzantines with Baldwin III's consent. Already 21 and eager to rule alone, Baldwin forced Melisende's retirement in 1152. In Antioch, Constance resisted pressure to remarry until 1153, when she chose the French nobleman Raynald of Châtillon as her second husband. In the 1150s, the Byzantine emperor Manuel I Komnenos successfully asserted suzerainty over Edessa and Antioch, and a series of Byzantine–Crusader marriage alliances sealed the dependence of the Crusader states on the Empire. However, Manuel's 1157–58 expedition would be the last time a full imperial army campaigned in Syria. From 1149, all Fatimid caliphs were children, and military commanders were competing for power. Ascalon, the Fatimids' last Palestinian bridgehead, hindered Frankish raids against Egypt, but Baldwin captured the town in 1153. The Damascenes feared further Frankish expansion, and Nur ad-Din seized Damascus with ease a year later. He continued to remit the tribute that Damascus' former rulers had offered to the Jerusalemite kings. Baldwin extracted tribute from the Egyptians as well. Raynald lacked financial resources. He tortured the Latin Patriarch of Antioch, Aimery of Limoges, to appropriate his wealth and attacked the Byzantines' Cilician Armenians. When Manuel delayed the payment he had been promised, Raynald pillaged Byzantine Cyprus. Thierry, Count of Flanders, brought military strength from the West for campaigning. Thierry, Baldwin, Raynald and Raymond III of Tripoli attacked Shaizar. Baldwin offered the city to Thierry, but Thierry refused Raynald's demand that he become his vassal, and the siege was abandoned. After Nur ad-Din seized Shaizar in 1157, the Nizari remained the last independent Muslim power in Syria. As prospects for a new crusade from the West were poor, the Franks of Jerusalem sought a marriage alliance with the Byzantines. Baldwin married Manuel's niece, Theodora, and received a significant dowry. With Baldwin's consent, Manuel forced Raynald into accepting Byzantine overlordship. The childless Baldwin III died in 1163. His younger brother Amalric had to repudiate his wife Agnes of Courtenay on grounds of consanguinity before his coronation, but the right of their two children, Baldwin IV and Sibylla, to inherit the kingdom was confirmed. The Fatimid Caliphate had rival viziers, Shawar and Dirgham, both eager to seek external support. This gave Amalric and Nur ad-Din the opportunity to intervene. Amalric launched five invasions of Egypt between 1163 and 1169, on the last occasion cooperating with a Byzantine fleet, but he could not establish a bridgehead. Nur ad-Din appointed his Kurdish general Shirkuh to direct the military operations in Egypt. Weeks before Shirkuh died in 1169, the Fatimid caliph Al-Adid made him vizier. Shirkuh's nephew Saladin succeeded him and ended the Shi'ite caliphate when Al-Adid died in September 1171. In March 1171, Amalric undertook a visit to Manuel in Constantinople to secure Byzantine military support for yet another attack on Egypt. To this end, he swore fealty to the Emperor before his return to Jerusalem, but conflicts with Venice and Sicily prevented the Byzantines from campaigning in the Levant.
In theory, Saladin was Nur ad-Din's lieutenant, but mutual distrust hindered their cooperation against the crusader states. As Saladin remitted suspiciously small revenue payments to him, Nur ad-Din began gathering troops for an attack on Egypt, but he died in May 1174. He left an 11-year-old son, As-Salih Ismail al-Malik. Within two months, Amalric died. His son and successor, Baldwin IV, was 13 and a leper. The accession of underage rulers led to disunity both in Jerusalem and in Muslim Syria. In Jerusalem, the seneschal Miles of Plancy took control, but unknown assailants murdered him on the streets of Acre. With the baronage's consent, Amalric's cousin, Raymond III of Tripoli, assumed the regency for Baldwin IV as bailli. He became the most powerful baron by marrying Eschiva of Bures, the richest heiress of the kingdom, and gaining Galilee. Nur ad-Din's empire quickly disintegrated. His eunuch confidant Gümüshtekin took As-Salih from Damascus to Aleppo. Gümüshtekin's rival, Ibn al-Muqaddam, seized Damascus but soon surrendered it to Saladin. By 1176, Saladin had reunited much of Muslim Syria by warring against Gümüshtekin and As-Salih's relatives, the Zengids. That same year, Emperor Manuel invaded the Sultanate of Rum to reopen the Anatolian pilgrimage route towards the Holy Land. His defeat at Myriokephalon weakened the Byzantines' hold on Cilicia. Upholding the balance of power in Syria was apparently Raymond's main concern during his regency. When Saladin besieged Aleppo in 1174, Raymond led a relief army to the city; the next year, when a united Zengid army invaded Saladin's realm, Raymond signed a truce with Saladin. Gümüshtekin released Raynald of Châtillon and Baldwin's maternal uncle, Joscelin III of Courtenay, for a large ransom. They hastened to Jerusalem, and Raynald seized Oultrejourdain by marrying Stephanie of Milly. As Baldwin was not expected to father children, his sister's marriage had to be arranged before his inevitable premature death from leprosy. His regent, Raymond, chose William of Montferrat as Sibylla's husband. William was the cousin of both Holy Roman Emperor Frederick Barbarossa and Louis VII of France. In 1176, Baldwin reached his majority at the age of 15, ending Raymond's regency. He revisited plans for an invasion of Egypt and renewed his father's pact with the Byzantines. Manuel dispatched a fleet of 70 galleys plus support ships to Outremer. As William had died and Baldwin's health was deteriorating, the Franks offered the regency and command of the Egyptian invasion to Baldwin's crusader cousin, Philip I, Count of Flanders. He wanted to be free to return to Flanders and rejected both offers. The plan for the invasion was abandoned, and the Byzantine fleet sailed for Constantinople. Baldwin negotiated a marriage between Hugh III, Duke of Burgundy, and Sibylla, but the succession crisis in France prevented Hugh from sailing. Tension between Baldwin's maternal and paternal relatives grew. When Raymond and Bohemond III of Antioch, both related to him on his father's side, came to Jerusalem unexpectedly before Easter in 1180, Baldwin panicked, fearing they had arrived to depose him and elevate Sibylla to the throne under their control. To thwart the suspected coup, he sanctioned her marriage to Guy of Lusignan, a young aristocrat from Poitou. Guy's brother Aimery held the office of constable of Jerusalem, and their family had close links to the House of Plantagenet. Baldwin's mother and her clique marginalised Raymond, Bohemond and the influential Ibelin family.
To prepare for a military campaign against the Seljuks of Rum, Saladin concluded a two-year truce with Baldwin and, after launching a short but devastating campaign along the coast of Tripoli, with Raymond. For the first time in the history of Frankish–Muslim relations, the Franks could not set conditions for the peace. Between 1180 and 1183, Saladin asserted his suzerainty over the Artuqids, concluded a peace treaty with the Rum Seljuks, seized Aleppo from the Zengids and re-established the Egyptian navy. Meanwhile, after the truce expired in 1182, Saladin demonstrated the strategic advantage of holding both Cairo and Damascus. While he faced Baldwin in Oultrejourdain, his troops from Syria pillaged Galilee. The Franks adopted defensive tactics and strengthened their fortresses. In February 1183, a Jerusalemite assembly levied an extraordinary tax for defence funding. Raynald was the sole Frankish ruler to pursue an offensive policy. He attacked an Egyptian caravan and built a fleet for a naval raid into the Red Sea. Byzantine influence declined after Manuel died in 1180. Bohemond repudiated his Byzantine wife Theodora and married Sybil, an Antiochene noblewoman with a bad reputation. Patriarch Aimery excommunicated him, and the Antiochene nobles who opposed the marriage fled to the Cilician Armenian prince, Ruben III. Saladin granted a truce to Bohemond and made preparations for an invasion of the kingdom of Jerusalem, where Guy took command of the defence. When Saladin invaded Galilee, the Franks responded with what William of Tyre described in his contemporaneous chronicle as their largest army in living memory, but they avoided fighting a battle. After days of fierce skirmishing, Saladin withdrew towards Damascus. Baldwin dismissed Guy from his position as bailli, apparently because Guy had proved unable to overcome factionalism in the army. In November 1183, Baldwin made Guy's five-year-old stepson, also called Baldwin, co-ruler and had him crowned king while attempting to annul the marriage of Guy and Sibylla. Guy and Sibylla fled to Ascalon, and Guy's supporters vainly interceded on the couple's behalf at a general council. An embassy to Europe was met with offers of money but not of military support. Already dying, Baldwin IV appointed Raymond bailli for 10 years but charged Joscelin with the ailing Baldwin V's guardianship. As there was no consensus on what should happen if the boy king died, it would fall to the pope, the Holy Roman Emperor, and the kings of France and England to decide whether his mother Sibylla or her half-sister Isabella had the stronger claim to the throne. Bohemond was staying at Acre around this time, allegedly because Baldwin IV wanted to secure Bohemond's support for his decisions on the succession. Back in Antioch, Bohemond kidnapped Ruben of Cilicia and forced him to become his vassal. Saladin signed a four-year truce with Jerusalem and attacked Mosul. He could not capture the city but extracted an oath of fealty from Mosul's Zengid ruler, Izz al-Din Mas'ud, in March 1186. A few months later, Baldwin V died, and a power struggle began in Jerusalem. Raymond summoned the barons to Nablus to a general council. In his absence, Sibylla's supporters, led by Joscelin and Raynald, took full control of Jerusalem, Acre and Beirut. Patriarch Heraclius of Jerusalem crowned her queen, and Guy was made her co-ruler. The barons assembling at Nablus offered the crown to Isabella's husband Humphrey IV of Toron, but he submitted to Sibylla to avoid a civil war.
After Humphrey's defection, all the barons but Baldwin of Ibelin and Raymond swore fealty to the royal couple. Baldwin went into exile, and Raymond forged an alliance with Saladin. Raynald seized another caravan, which violated the truce and prompted Saladin to assemble his forces for the jihad. Raymond allowed Muslim troops to pass through Galilee to raid around Acre. His shock at the Frankish defeat in the resulting Battle of Cresson brought him to a reconciliation with Guy. Guy now gathered a large force, committing all of his kingdom's available resources. The leadership was divided over tactics. Raynald urged an offensive, while Raymond proposed defensive caution, even though Saladin was besieging Raymond's castle at Tiberias. Guy decided to deal with the siege. The march towards Tiberias was arduous, and Saladin's troops overwhelmed the exhausted Frankish army at the Horns of Hattin on 4 July 1187. Hattin was a massive defeat for the Franks. Nearly all the major Frankish leaders were taken prisoner, but only Raynald and the armed monks of the military orders were executed. Raymond was among the few Frankish leaders who escaped captivity. He fell seriously ill after reaching Tripoli. Within months of Hattin, Saladin conquered almost the entire kingdom. The city of Jerusalem surrendered on 2 October 1187. There were no massacres following the conquest, but tens of thousands of Franks were enslaved. Those who could negotiate a free passage or were ransomed swarmed to Tyre, Tripoli, or Antioch. Conrad of Montferrat commanded the defences of Tyre. He was William of Montferrat's brother and had arrived only days after Hattin. The childless Raymond died, and Bohemond's younger son, also called Bohemond, assumed power in Tripoli. After news of the Franks' devastating defeat at Hattin reached Italy, Pope Gregory VIII called for a new crusade. Passionate sermons raised religious fervour, and it is likely that more people took the crusader oath than during recruitment for the previous crusades. Bad weather and growing discontent among his troops forced Saladin to abandon the siege of Tyre and allow his men to return to Iraq, Syria, and Egypt early in 1188. In May, Saladin turned his attention to Tripoli and Antioch. The arrival of William II of Sicily's fleet saved Tripoli. Saladin released Guy on the condition that he go overseas and never bear arms against him. The historian Thomas Asbridge proposes that Saladin likely anticipated that a power struggle between Guy and Conrad was inevitable and could weaken the Franks. Indeed, Guy failed to depart for Europe. In October, Bohemond asked Saladin for a seven-month truce, offering to surrender the city of Antioch if help did not arrive. The chronicler Ali ibn al-Athir wrote, after the Frankish castles had been starved into submission, that "the Muslims acquired everything from as far as Ayla to the furthest districts of Beirut with only the interruption of Tyre and also all the dependencies of Antioch, apart from al-Qusayr". Guy of Lusignan, his brother Aimery, and Gerard de Ridefort, grand master of the Templars, gathered about 600 knights in Antioch. They approached Tyre, but Conrad of Montferrat refused them entry, convinced Guy had forfeited his claim to rule when Saladin conquered his kingdom. Guy and his comrades knew western crusaders would arrive soon and risked a token move on Acre in August 1189. Crusader groups from many parts of Europe joined them. Their move surprised Saladin and prevented him from resuming the invasion of Antioch.
Three major crusader armies departed for the Holy Land in 1189–1190. Frederick Barbarossa's crusade ended abruptly in June 1190 when he drowned in the Saleph River in Anatolia. Only fragments of his army reached Outremer. Philip II of France landed at Acre in April 1191, and Richard I of England arrived in May. During his voyage, Richard had seized Cyprus from the island's self-declared emperor Isaac Komnenos. Guy and Conrad had reconciled, but their conflict revived when Sibylla of Jerusalem and her two daughters by Guy died. Conrad married the reluctant Isabella, Sibylla's half-sister and heir, despite her existing marriage to Humphrey of Toron and gossip that Conrad himself had two living wives. After an attritional siege, the Muslim garrison surrendered Acre, and Philip and most of the French army returned to Europe. Richard led the crusade to victory at Arsuf, capturing Jaffa, Ascalon and Darum. Internal dissension forced Richard to abandon Guy and accept Conrad's kingship. Guy was compensated with possession of Cyprus. In April 1192, Conrad was assassinated in Tyre. Within a week, the widowed Isabella was married to Henry, Count of Champagne. Saladin would not risk defeat in a pitched battle, and Richard feared the exhausting march across arid lands towards Jerusalem. As Richard fell ill and needed to return home to attend to his affairs, a three-year truce was agreed in September 1192. The Franks kept the land between Tyre and Jaffa but dismantled Ascalon; Christian pilgrimages to Jerusalem were allowed. Frankish confidence in the truce was not high. In April 1193, Geoffroy de Donjon, head of the Knights Hospitaller, wrote in a letter, 'We know for certain that since the loss of the land the inheritance of Christ cannot easily be regained. The land held by the Christians during the truces remains virtually uninhabited.' The Franks' strategic position was not entirely unfavourable: they kept the coastal towns, and their frontiers had shortened. Their enclaves represented a minor threat to the Ayyubids' empire in comparison with the Artuqids, Zengids, Seljuks of Rum, Cilician Armenians or Georgians in the north. After Saladin died in March 1193, none of his sons could assume authority over his Ayyubid relatives, and the dynastic feud lasted for almost a decade. The Ayyubids agreed to near-constant truces with the Franks and offered territorial concessions to keep the peace. Bohemond III of Antioch did not include his recalcitrant Cilician Armenian vassal Leo in his truce with Saladin in 1192. Leo was Ruben III's brother; when Ruben died, Leo displaced Ruben's daughter and heir, Alice. In 1191, Saladin abandoned a three-year occupation of the northern Syrian castle of Bagras, and Leo seized it, ignoring the claims of the Templars and Bohemond. In 1194, Bohemond accepted Leo's invitation to discuss Bagras' return, but Leo imprisoned him, demanding Antioch for his release. The Greek population and the Italian community rejected the Armenians and formed a commune under Bohemond's eldest son, Raymond. Bohemond was released when he abandoned his claims on Cilicia, forfeiting Bagras and marrying Raymond to Alice. Any male heir of this marriage was expected to inherit both Antioch and Armenia. When Raymond died in 1197, Bohemond sent Alice and Raymond's posthumous son Raymond-Roupen to Cilicia. Raymond's younger brother Bohemond IV came to Antioch, and the commune recognised him as their father's heir. In September 1197, Henry of Champagne died after falling from a palace window in the kingdom's new capital, Acre.
The widowed Isabella married Aimery of Lusignan, who had succeeded Guy in Cyprus. Saladin's ambitious brother Al-Adil I reunited Egypt and Damascus under his rule by 1200. He expanded the truces with the Franks and enhanced commercial contacts with Venice and Pisa. Bohemond III died in 1201. The commune of Antioch renewed its allegiance to Bohemond IV, although several nobles felt compelled to support Raymond-Roupen and joined him in Cilicia. Leo of Cilicia launched a series of military campaigns to assert Raymond-Roupen's claim to Antioch. Bohemond made alliances with Saladin's son, Az-Zahir Ghazi of Aleppo, and with Suleiman II, the Sultan of Rum. As neither Bohemond nor Leo could muster enough troops to defend their Tripolitan or Cilician hinterland against enemy invasions or rebellious aristocrats and to garrison Antioch simultaneously, the War of the Antiochene Succession lasted for more than a decade. The Franks knew they could not regain the Holy Land without conquering Egypt. The leaders of the Fourth Crusade planned an invasion of Egypt but sacked Constantinople instead. Aimery and Isabella died in 1205. Isabella's daughter by Conrad, Maria of Montferrat, succeeded, and Isabella's half-brother, John of Ibelin, became regent. The regency ended with Maria's marriage in 1210 to John of Brienne, a French aristocrat and experienced soldier. After her death two years later, John ruled as regent for their infant daughter, Isabella II. He participated in a military campaign against Cilicia, but it did not damage Leo's power. Leo and Raymond-Roupen had exhausted Antioch with destructive raids, and they occupied the city in 1216. Raymond-Roupen was installed as prince, and Leo restored Bagras to the Templars. Raymond-Roupen could not pay for the aristocrats' loyalty in his impoverished principality, and Bohemond regained Antioch with local support in 1219. The personal union between Antioch and Tripoli proved lasting, but in fact both crusader states disintegrated into small city-states. Raymond-Roupen fled to Cilicia, seeking Leo's support, and when Leo died in May 1219, attempted to claim the Cilician throne against Leo's infant daughter Isabella. John of Brienne was the leader of a gathering crusade, but Frederick II, the ruler of Germany and Sicily, was expected to assume control on his arrival; the papal legate, Cardinal Pelagius, controlled the finances from the west. The crusaders invaded Egypt and captured Damietta in November 1219. The new sultan of Egypt, Al-Kamil, repeatedly offered the return of Jerusalem and the Holy Land in exchange for the crusaders' withdrawal. His ability to implement his truce proposals was questionable, however, because his brother Al-Mu'azzam Isa ruled the Holy Land. The crusaders knew that their hold on the territory would not be secure as long as the castles in Oultrejourdain remained in Muslim hands. Prophecies about their inevitable victory spread in their camp, and Al-Kamil's offer was rejected. After twenty-one months of stalemate, the crusaders marched on Cairo but were trapped between the Nile floods and the Egyptian army. The crusaders surrendered Damietta in return for safe conduct, ending the crusade. While staying in Damietta, Cardinal Pelagius sent reinforcements to Raymond-Roupen in Cilicia, but Constantine of Baberon, who was regent for the Cilician queen, acted quickly. He captured Raymond-Roupen, who died in prison. The queen was married to Bohemond's son Philip to cement an alliance between Cilicia and Antioch.
A feud between the two states broke out again after disaffected Armenian aristocrats murdered Philip in late 1224. An alliance between the Armenians and Bohemond's former Ayyubid allies in Aleppo foiled his attempts at revenge. Frederick renewed his crusader oath at his imperial coronation in Rome in 1220. He did not join the Egyptian crusade but reopened negotiations with Al-Kamil over the city of Jerusalem. In 1225, Frederick married Isabella II and assumed the title of king of Jerusalem. Two years later, Al-Kamil promised to abandon all lands conquered by Saladin in return for Frankish support against Al-Mu'azzam. An epidemic prevented Frederick's departure for a crusade, and Pope Gregory IX excommunicated him for repeatedly breaking his oath. In April 1228, Isabella died after giving birth to Conrad. Without seeking a reconciliation with the Pope, Frederick sailed for the crusade. His attempts to confiscate baronial fiefs brought him into conflict with the Frankish aristocrats. As Al-Mu'azzam had died, Frederick made the most of his diplomatic skills to achieve the partial implementation of Al-Kamil's previous promise. They signed a truce for ten years, ten months, and ten days (the maximum period for a peace treaty between Muslims and Christians, according to Muslim custom). It restored Jerusalem, Bethlehem, Nazareth and Sidon to the Franks while granting the Temple Mount to the Muslims. The native Franks were unenthusiastic about the treaty because it was questionable whether the restored territory could be defended. Frederick left for Italy in May 1229 and never returned. He sent Richard Filangieri, with an army, to rule the kingdom of Jerusalem as his bailli. The Ibelins denied Frederick's right to appoint his lieutenant without consulting the barons, and Outremer plunged into a civil war known as the War of the Lombards. Filangieri occupied Beirut and Tyre, but the Ibelins and their allies firmly kept Acre and established a commune to protect their interests. Pope Gregory IX called for a new crusade in preparation for the expiry of the truce. Between 1239 and 1241, wealthy French and English nobles like Theobald I of Navarre and Richard of Cornwall led separate military campaigns to the Holy Land. They followed Frederick's tactics of forceful diplomacy and played rival factions off against each other in the succession disputes that followed Al-Kamil's death. Richard's treaty with Al-Kamil's son, As-Salih Ayyub, restored most land west of the Jordan River to the Franks. Conrad reached the age of majority in 1243 but failed to visit Outremer. Arguing that Conrad's heir presumptive was entitled to rule in his absence, the Jerusalemite barons elected his mother's maternal aunt, Alice of Champagne, as regent. The same year, they captured Tyre, the last centre of Frederick's authority in the kingdom. The Mongol Empire's westward expansion reached the Middle East when the Mongols conquered the Khwarazmian Empire in Central Asia in 1227. Part of the Khwarazmian army fled to eastern Anatolia, and these masterless Turkic soldiers offered their services to neighbouring rulers for pay. Western Christians regarded the Mongols as potential allies against the Muslims because some Mongol tribes adhered to Nestorian Christianity. In fact, most Mongols were pagans with a strong belief in their Great Khan's divine right to universal rule, and they demanded unconditional submission from both Christians and Muslims.
As-Salih Ayyub hired the Khwarazmians and garrisoned new mamluk troops in Egypt, alarming his uncle As-Salih Ismail, Emir of Damascus. Ismail bought the Franks' alliance with a promise to restore 'all the lands that Saladin had reconquered'. Catholic priests took possession of the Dome of the Rock, but in July 1244 Khwarazmians marching towards Egypt sacked Jerusalem unexpectedly. The Franks gathered all available troops and joined Ismail near Gaza, but the Khwarazmians and Egyptians defeated the Frankish and Damascene coalition at La Forbie on 18 October. Few Franks escaped from the battlefield. As-Salih Ayyub captured most of the crusaders' mainland territory, restricting the Franks to a few coastal towns. Louis IX of France launched a failed crusade against Egypt in 1249. He was captured near Damietta with the remnants of his army and was ransomed days after the Bahri Mamluks assumed power in Egypt by murdering As-Salih's son Al-Muazzam Turanshah in May 1250. Louis spent four more years in Outremer. As the kingdom's effective ruler, he conducted negotiations with both the Syrian Ayyubids and the Egyptian Mamluks and refortified the coastal towns. He sent an embassy from Acre to the Great Khan Güyük, offering an anti-Muslim alliance to the Mongols. Feuds between rival candidates for the regency and commercial conflicts between Venice and Genoa resulted in a new civil war in 1256, known as the War of Saint Sabas. The conflict between the pro-Venetian Bohemond VI and his Genoese vassals, the Embriaci, brought the war to Tripoli and Antioch. In 1258, the Ilkhan Hulagu, younger brother of the Great Khan Möngke, sacked Baghdad and ended the Abbasid Caliphate. Two years later, Hethum I of Cilicia and Bohemond VI joined forces with the Mongols in the sack of Aleppo, during which Bohemond set fire to its mosque, and in the conquest of northern Syria. The Mongols emancipated the Christians from their dhimmi status, and the local Christian population cooperated with the conquerors. When Hulagu and much of his force moved east on Möngke's death to address the Mongol succession, the Mamluks of Egypt moved to confront the remaining Mongols; Jerusalem stayed neutral. The Mamluks defeated the greatly reduced Mongol army at Ain Jalut in 1260. On the Mamluks' return, the sultan Qutuz was assassinated and replaced by the general Baibars. Baibars revived Saladin's empire by uniting Egypt and Syria and held Hulagu in check through an alliance with the Mongols of the Golden Horde. He reformed governance in Egypt, giving power to the elite mamluks. The Franks did not have the military capability to resist this new threat. A Mongol garrison was stationed at Antioch, and individual Frankish barons concluded separate truces with Baibars. Determined to conquer the crusader states, he captured Caesarea and Arsuf in 1265 and Safed in 1266, and sacked Antioch in 1268. Jaffa surrendered, and Baibars weakened the military orders by capturing the castles of Krak des Chevaliers and Montfort before returning his attention to the Mongols of the Ilkhanate for the rest of his life. Massacres of the Franks and native Christians regularly followed a Mamluk conquest. In 1268, the new Sicilian king Charles I of Anjou executed Conradin, the titular king of Jerusalem, in Naples after his victory at Tagliacozzo. Isabella I's great-grandson Hugh III of Cyprus and her granddaughter Maria of Antioch disputed the succession. The barons preferred Hugh, but in 1277 Maria sold her claim to Charles. He sent Roger of San Severino to act as bailli.
With the support of the Templars, Roger blocked Hugh's access to Acre, forcing him to retreat to Cyprus and again leaving the kingdom without a resident monarch. The Mongols of the Ilkhanate sent embassies to Europe proposing anti-Mamluk alliances, but the major western rulers were reluctant to launch a new crusade for the Holy Land. The War of the Sicilian Vespers weakened Charles's position in the west. After his death in 1285, Henry II of Cyprus was acknowledged as Jerusalem's nominal king, but the rump kingdom was in fact a mosaic of autonomous lordships, some under Mamluk suzerainty. The death of the warlike Ilkhan Abaqa in 1282, combined with the Pisan and Venetian wars with the Genoese, finally gave the Mamluk sultan, Al-Mansur Qalawun, the opportunity to expel the Franks. In 1289 he destroyed Genoese-held Tripoli, enslaving or killing its residents. In 1290, Italian crusaders broke his truce with Jerusalem by killing Muslim traders in Acre. Qalawun's death did not hinder the successful Mamluk siege of the city in 1291. Those who could fled to Cyprus, while those who could not were slaughtered or sold into slavery. Without hope of support from the West, Tyre, Beirut, and Sidon all surrendered without a fight. The Mamluk policy was to destroy all physical evidence of the Franks; the destruction of the ports and fortified towns ruptured the history of a coastal city civilisation rooted in antiquity.
Government and institutions
Modern historiography has focused on the kingdom of Jerusalem. Possibly this is because the city was the objective of the First Crusade and was perceived as the centre and chief city of medieval Christendom. However, research into the kingdom does not provide a comprehensive common template for the development of the other Latin settlements. Jerusalem's royal administration was based in the city until it was lost, and then in Acre. It featured the typical household officers of most European rulers: a constable, marshal, chamberlain, seneschal and butler, and a chancellor, a cleric who led the chancery. Royal territory was directly administered by viscounts. All tangible evidence of the written law was lost in 1187 when the Franks lost the city of Jerusalem to the Muslims. The courts of the princes of Antioch were similar and produced a body of Italo-Norman law, known as the Assizes of Antioch, that was later adopted by Cilician Armenia. These laws have survived in 13th-century Armenian translations. Relationships among Antioch's various Frankish, Syrian, Greek, Jewish, Armenian, and Muslim inhabitants were generally good. The brief existence of the uniquely landlocked Edessa means it is the least studied of the crusader states, but its history is traceable through Armenian and Syriac chronicles in addition to Latin sources. As in Jerusalem, the political institutions appear to have reflected the founders' northern French roots, although the membership of city councils included indigenous Christians. The population was diverse, including Armenian Orthodox, Greeks known as Melkites, Syrian Orthodox known as Jacobites, and Muslims. In Tripoli, the fourth Frankish state, Raymond of Saint-Gilles and his successors ruled directly over several towns, granting the rest as fiefs to lords originating in Languedoc and Provence; Gibelet was given to the Genoese in return for naval support. In the 12th century this system provided a total of 300 knights, a much smaller army than those of Antioch or Jerusalem.
Architectural and artistic activity in Lebanese churches provides evidence that the indigenous populations prospered under Frankish rule, in part because of the region's remoteness from the worst impacts of Saladin's conquests in 1187–1188. These populations included Arabic-speaking Melkites, Monophysites, Nestorians and Syrians, and large numbers of Syriac-speaking Maronites with their own clerical hierarchies. The Greek Orthodox Church was restricted, as in Jerusalem. There were similar self-governing Muslim communities of Druze, Alawites and Isma'ilis in the frontier areas to the north. The multi-ethnic structure may well have been more pronounced in Tripoli, and in the 12th century a southern French culture may have prevailed there, although this characteristic faded over time. The king of Jerusalem's foremost role was as leader of the feudal host during the near-constant warfare of the early decades of the 12th century. The kings rarely awarded land or lordships, and those they did award frequently became vacant and reverted to the crown because of the high mortality rate. Their followers' loyalty was instead rewarded with city incomes. Through this, the domain of the first five rulers was larger than the combined holdings of the nobility. These kings of Jerusalem had greater internal power than comparable western monarchs, but they lacked the personnel and administrative systems necessary to govern such a large realm. Later in the century, magnates like Raynald of Châtillon, Lord of Oultrejourdain, and Raymond III, Count of Tripoli and Prince of Galilee, established baronial dynasties and often acted as autonomous rulers. Royal power waned, and governance was increasingly undertaken within the feudatories. The remaining central control was exercised at the High Court or Haute Cour, also known in Latin as the Curia generalis and Curia regis, or in vernacular French as the parlement. These were meetings between the king and his tenants-in-chief, with a quorum of the king and three tenants-in-chief. The vassal's duty to give counsel developed into a privilege, until the monarch's legitimacy depended on the court's agreement. The High Court comprised the great barons and the king's direct vassals. In 1162, the assise sur la ligece (roughly, 'assize on liege homage') expanded the court's membership to all 600 or more fief-holders; those paying direct homage to the king became members of the Haute Cour. By the end of the 12th century, they were joined by the leaders of the military orders and, in the 13th century, by the Italian communes. The leaders of the Third Crusade ignored the monarchy. The kings of England and France agreed on the division of future conquests as if there were no need to consider the local nobility. The historian Joshua Prawer felt the weakness of the crown of Jerusalem was demonstrated by the rapid offering of the throne to Conrad of Montferrat in 1190 and then to Henry II, Count of Champagne, in 1192, although this was given legal effect by Baldwin IV's will, which stipulated that if Baldwin V died a minor, the pope, the kings of England and France, and the Holy Roman Emperor would decide the succession. Prior to the 1187 defeat at Hattin, laws developed by the court were recorded as assises in the Letters of the Holy Sepulchre. All written law was lost in the fall of Jerusalem, and the legal system was thereafter largely based on custom and the memory of the lost legislation.
The renowned jurist Philip of Novara lamented, 'We know [the laws] rather poorly, for they are known by hearsay and usage...and we think an assize is something we have seen as an assize...in the kingdom of Jerusalem [the barons] made much better use of the laws and acted on them more surely before the land was lost.' An idealised view of the early 12th-century legal system emerged. The barons reinterpreted the assise sur la ligece, which Amalric I had intended to strengthen the crown, so that it constrained the monarch instead, particularly regarding the monarch's right to confiscate fiefs without trial. The loss of the vast majority of rural fiefs led the baronage to evolve into an urban mercantile class in which knowledge of the law was a valuable, well-regarded skill and a career path to higher status. After Hattin, the Franks lost their cities, lands, and churches. Barons fled to Cyprus and intermarried with leading new emigres from the Lusignan, Montbéliard, Brienne and Montfort families. This created a class distinct from the king-consorts Guy, Conrad, Henry, Aimery and John, and from the absent Hohenstaufen dynasty that followed, all of whom had a limited understanding of the Latin East. The barons of Jerusalem in the 13th century have been poorly regarded by both contemporary and modern commentators: their superficial rhetoric disgusted Jacques de Vitry, and the historian Jonathan Riley-Smith writes of their pedantry and their use of spurious legal justification for political action. The barons themselves valued this ability to articulate the law, as evidenced by the elaborate and impressive treatises of the baronial jurists from the second half of the 13th century. From May 1229, when Frederick II left the Holy Land to defend his Italian and German lands, monarchs were absent. Conrad was titular king from 1228 until 1254, and his son Conradin until 1268, when Charles of Anjou executed him. The monarchy of Jerusalem had limited power in comparison with the West, where rulers developed bureaucratic machinery for administration, jurisdiction, and legislation through which they exercised control. In 1242 the barons prevailed and appointed a succession of Ibelin and Cypriot regents. Centralised government collapsed in the face of the independence exercised by the nobility, military orders, and Italian communes. The three Cypriot Lusignan kings who succeeded lacked the resources to recover the lost territory. One claimant sold the title of king to Charles of Anjou, who gained power for a short while but never visited the kingdom.
Military
All estimates of the size of Frankish and Muslim armies are uncertain, but existing accounts indicate that it is probable that the Franks of Outremer raised the largest armies in the Catholic world. As early as 1111, the four crusader states fielded 16,000 troops to launch a joint military campaign against Shaizar. Edessa and Tripoli raised armies of 1,000–3,000 troops, while Antioch and Jerusalem deployed 4,000–6,000 soldiers. In comparison, William the Conqueror commanded 5,000–7,000 troops at Hastings, and 12,000 crusaders fought against the Moors at Las Navas de Tolosa in Iberia. Among the Franks' early enemies, the Fatimids possessed 10,000–12,000 troops, the rulers of Aleppo had 7,000–8,000 soldiers, and the Damascene atabegs commanded 2,000–5,000 troops. The Artuqids could hire as many as 30,000 Turks, but these nomadic warriors were unfit for lengthy sieges. After uniting Egypt, Syria, and much of Iraq, Saladin raised armies around 20,000 strong.
Egyptian money and Syrian manpower were the two critical factors underpinning Saladin's military might during the period. In response, the Franks quickly increased their forces to around 18,000 troops by 1183, but only by resorting to extraordinary fiscal measures such as the general tax levied that year. In the 13th century, control of Acre's lucrative commerce provided the resources to maintain sizeable armies. At La Forbie, 16,000 Frankish warriors perished on the battlefield, but this was the last occasion when a united Jerusalemite army fought a pitched battle. During the 1291 siege of Acre, about 15,000 Frankish troops defended the city against more than 60,000 Mamluk warriors. The crusader states' military power depended mainly on four major categories of soldiery: vassals, mercenaries, visitors from the west, and troops provided by the military orders. Vassals were expected to perform their military duties in person as fully armed knights or as more lightly armoured serjants. Unmarried female fief-holders had to hire mercenaries, and guardians represented underage vassals. Disabled men and men over sixty were required to cede their horses and arms to their lords. Vassals who owed the service of more than one soldier had to mobilise their own vassals or employ mercenaries. A feudal lord's army could be sizeable. For example, 60 cavalrymen and 100 footmen accompanied Richard of Salerno, then lord of Marash, during a joint Antiochene–Edessene campaign against Mawdud in 1111. Complaints about Frankish rulers' difficulties in paying their troops abound, showing the importance of mercenaries in Levantine warfare. Mercenaries were hired regularly for military campaigns, for garrisoning forts and, particularly in Antioch, for serving in the prince's armed retinue. The crusader states could hardly have survived without constant support from the west. Armed pilgrims arriving at moments of crisis could save the day, like those who landed just after Baldwin I's defeat at Ramla in 1102. Westerners were, however, often unwilling to accept the Frankish leaders' authority. The military orders emerged as a new form of religious organisation in response to the unstable conditions on western Christendom's borderlands. The first of them, the Knights Templar, developed from a knightly brotherhood attached to the Church of the Holy Sepulchre. Around 1119, the knights took the monastic vows of chastity, poverty, and obedience and committed themselves to the armed protection of pilgrims visiting Jerusalem. This unusual combination of monastic and knightly ideals did not meet with general approval, but the Templars found an influential protector in the prominent Cistercian abbot Bernard of Clairvaux. Their monastic rule was confirmed at the Council of Troyes in France in 1129. The name derives from Solomon's Temple, the Frankish name for the Al-Aqsa Mosque, where they established their first headquarters. The Templars' commitment to the defence of fellow Christians proved an attractive idea, stimulating the establishment of new military orders; in Outremer these always arose through the militarisation of charitable organisations. The Hospitallers represent the earliest example. Originally a nursing confraternity at a Jerusalemite hospital founded by merchants from Amalfi, they assumed military functions in the 1130s. Three further military orders followed in the Levant: the Order of Saint Lazarus, mainly for leper knights, in the 1130s; the German Order of Teutonic Knights in 1198; and the English Order of St Thomas of Acre in 1228.
As frequent beneficiaries of pious donations across Europe and the Levant, the Hospitallers, the Templars and, to a lesser extent, the Teutonic Knights accumulated considerable wealth. They administered their scattered estates through an extensive network of branch houses, each required to transfer a part, generally one-third, of its revenues to the Jerusalemite headquarters. As the regular transfer of goods and money required the development of complex logistical and financial systems, the three orders operated as early forms of supranational trading houses and credit institutions. Their networks facilitated international money transfers, because funds deposited in one branch could be paid out in another, and loans granted in one country could be repaid in another. The Hospitallers never abandoned charitable work. In Jerusalem, their hospital served hundreds of patients of all religions and genders. Pilgrims, pregnant women, abandoned children, and impoverished people could also enlist their aid. However, waging war against infidels remained the military orders' prime obligation. As an early example of a standing army, they had a pivotal role in the crusader states' defence. The knight-brothers and their armed servants were professional soldiers under monastic vows. They wore a habit that always bore a cross and indicated its wearer's rank. As lay rulers and aristocrats seldom had the funds to cover all the costs of border defence, they eagerly ceded their border forts to the military orders. The earliest examples include Beth Gibelin in Jerusalem and Krak des Chevaliers in Tripoli, both acquired by the Hospitallers. Companies of highly trained mounted knights made up the central element of Frankish armies. Their military expertise and outstanding unit cohesion distinguished them from the Byzantine and Muslim heavy cavalry. Frankish foot soldiers were disciplined to cooperate closely with the knights and to defend them against attacks by the Turkic light cavalry. The Frankish armies' distinctive feature was the extensive deployment of foot soldiers equipped with crossbows; Muslim commanders employed crossbowmen almost exclusively in sieges. Native Christians and converted Turks, along with some Franks, served as lightly armoured cavalrymen called turcopoles. They were positioned to fight against the Turkic light cavalry and were well suited for raids. The Frankish knights fought in close-order formation and applied tactics to enhance the impact of a cavalry charge, such as surprise attacks at dawn and chasing herds of cattle towards an enemy camp. During a Frankish cavalry charge, the Muslim troops attempted to avoid a direct clash until the knights were separated from the infantry and their horses had become exhausted. Frankish foot soldiers could create a 'shield-roof' against the rain of Turkish arrows. Feigned retreat was a tactic used by both Muslim and Frankish troops, although Christian chroniclers considered it shameful. In sieges, the Franks avoided direct assaults; instead, they blockaded the besieged town and starved the defenders into submission. By contrast, Muslim commanders preferred direct attacks, as they could easily muster new troops to replace those who had perished. Both sides employed similar siege engines, including wooden siege towers, battering rams, mangonels and, from the 1150s, large trebuchets. The extensive use of carrier pigeons and signal fires was an important element of Muslim warfare.
As Muslim commanders were informed of the Franks' movements in good time, they could intercept Frankish invaders unexpectedly. In comparison with contemporaneous Europe, battles were not uncommon in Outremer. The Franks fought battles mainly in defensive situations. They adopted delaying tactics only when they obviously had no chance of defeating a large invading force, as during Saladin's invasion of Antioch in 1187 and the Mamluk attacks against Outremer in the 1260s. While on the offensive, the Franks typically risked pitched battles if they could gain substantial territory and a local faction supported their campaign. As the Franks were unable to absorb casualties as effectively as their enemies, a defeat in a major battle could put the very existence of a crusader state at risk. Examples include the shrinking of Antiochene territory after the defeat of an Antiochene–Edessene coalition at the Battle of Harran in 1104 and the territorial consequences of Saladin's triumph at Hattin. From the 1150s, observers like the chroniclers Michael the Syrian and Ali ibn al-Athir concluded that the Franks' military skills had weakened. In fact, the Franks could still launch long-distance campaigns against Egypt and withstand enemy attacks for days without adequate provisions. Consequently, as historian Nicholas Morton proposes, their defeats could more likely be attributed to their enemies' flexibility. The Muslims had learnt how to remedy their own shortcomings and take advantage of the Franks' weaknesses. Muslim rulers intensified jihād propaganda to curb ethnic tensions, while disputes between Frankish and western commanders prevented their effective cooperation. The Muslim commanders adopted new tactics against the heavily armoured knights, like the sudden division of their ranks during a cavalry charge. In contrast, the Franks could not compete with their enemies' swiftness. In a siege situation, they insisted on the deployment of siege towers, although a tower's construction lasted four to six weeks, and during this period relief forces could reach the besieged town or fortress. By contrast, the Muslims preferred quick mining operations like digging under ramparts or burning walls. Demography Without a solid documentary basis, modern calculations about the size of the crusader states' population are only guesswork. Medieval chronicles contain demographic data, but they mostly present exaggerated numbers, without differentiating Franks and native Christians. Calculations about the population of a town are often based on reports of a siege, when refugees from nearby villages had swollen its numbers. Estimates of the number of Franks in Outremer range between 120,000 and 300,000. If these numbers are credible, Franks made up at least 15% of the crusader states' total population. In context, Josiah Russell estimates the population of what he calls 'Islamic territory' as roughly 12.5 million in 1000 (Anatolia 8 million, Syria 2 million, Egypt 1.5 million and North Africa 1 million), with the European areas that provided crusaders having a population of 23.7 million. He estimates that by 1200 these figures had risen to 13.7 million in Islamic territory (Anatolia 7 million, Syria 2.7 million, Egypt 2.5 million and North Africa 1.5 million), while the crusaders' home countries' population was 35.6 million. Russell also acknowledges that much of Anatolia was Christian or under the Byzantines and that some purportedly Islamic areas such as Mosul and Baghdad also had significant Christian populations.
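These percentages can be made concrete with a rough bound. The following is an illustrative back-of-the-envelope calculation only, assuming the 120,000–300,000 range for the Frankish population and the 'at least 15%' share quoted above; it is not drawn from the source:

\[
\text{total population} \le \frac{\text{Franks}}{0.15},
\qquad
\frac{120{,}000}{0.15} = 800{,}000,
\qquad
\frac{300{,}000}{0.15} = 2{,}000{,}000.
\]

On these assumptions, the quoted figures would imply a total population of the crusader states of roughly 0.8 to 2 million at most.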
Immigration from Catholic Europe was continuous until the end of the crusader states. Although most colonists settled in the coastal cities, the Franks' presence is documented in more than 200 villages (about 15% of all rural settlements) in the Kingdom of Jerusalem. Some Frankish rural settlements were planned villages, established to encourage settlers from the West; some were shared with native Christians. The native Jewish population was concentrated in the four holy cities of Jerusalem, Hebron, Safed and Tiberias. From the late 12th century, refugees from territories lost to the Muslims increased the Christian population of the coastal cities, but emigration to Cyprus or Frankish Greece can also be detected. The expansion of the urban population is most obvious at Acre, where a new suburb developed following the Third Crusade. Emigration from Outremer intensified from the 1240s as prospects of the crusader states' survival darkened. In this period, a massive influx of Frankish and native Christian refugees to Cyprus is well documented. Franks who did not flee could survive the Mamluk conquest as slaves or renegades: a Franciscan friar met with Frankish prisoners of war and converts to Islam at Acre more than a decade after the fall of the city. Immigration from Jewish communities in Europe also increased from this period, owing to the travel routes that Christian pilgrims had opened up. For example, in 1211 hundreds of rabbis immigrated to the crusader states. Society Modern research indicates that Jews, Muslims and local Christian populations were less integrated than previously thought. Christians lived around Jerusalem and in an arc stretching from Jericho and the Jordan River to Hebron in the south. Comparisons of archaeological evidence of Byzantine churches built prior to the Muslim conquest with 16th-century Ottoman census records show that some Greek Orthodox communities disappeared before the crusades, but most continued during and for centuries after. Maronites were concentrated in Tripoli; Jacobites in Antioch and Edessa. Armenians were concentrated in the north, but communities existed in all major towns. Central areas had a predominantly Sunni Muslim population, but Shi'ite communities existed in Galilee. Muslim Druze lived in the mountains of Tripoli. Jews lived in coastal towns, in some Galilean towns and in the holy cities of Jerusalem, Safed, Hebron and Tiberias. Little research has been done on Islamic conversion, but the available evidence led Ellenblum to believe that around Nablus and Jerusalem Christians remained a majority. Most of the indigenous population were peasants living off the land. Charters from the early 12th century show evidence of the donation of local villeins (free serfs) to nobles and religious institutions. This may have been a method of denoting the revenues from these villeins or from land where the boundaries were unclear. These are described as villanus, surianus for Christians or sarracenus for Muslims. The term servus was reserved for the many urban domestic slaves the Franks held. The use of villanus is thought to reflect the higher status that villagers or serfs held in the Near East; indigenous men were considered to have servile land tenures rather than lacking personal freedom. Villeins' status differed from that of Western serfs in that they could marry outside their lords' domain, were not obliged to perform unpaid labour, and could hold land and inherit property. However, the Franks needed to maintain productivity, so the villagers were tied to the land.
Charters show landholders agreeing to return any villeins belonging to other landholders whom they found on their property. Peasants were required to pay the lord between one quarter and one half of their crop yields. The Muslim pilgrim Ibn Jubayr reported a poll tax of one dinar and five qirat per head and a tax on produce from trees. 13th-century charters indicate this increased after the loss of the first kingdom to redress the Franks' lost income. Historian Christopher MacEvitt cites these as reasons that the term 'indentured peasant' is a more accurate description for the villagers in the Latin East than 'serf'. Linguistic differences remained a key differentiator between the Frankish lords and the local population. The Franks typically spoke Old French and wrote in Latin. While some learnt Arabic, Greek, Armenian, Syriac or Hebrew, this was unusual. Society was politically and legally stratified. Ethnically based communities were self-governing, with relations between communities controlled by the Franks. Research has focused on the role of the ruʾasāʾ, Arabic for leaders, chiefs or mayors. Riley-Smith divided these into urban freemen and rural workers tied to the land; ruʾasāʾ administered the Frankish estates, governed the native communities, and were often respected local landowners. If communities were segregated, as indicated by the written evidence and identified by Riley-Smith and Prawer, inter-communal conflict was avoided and interaction between the landed class and the peasants was limited. MacEvitt identifies possible tension between competing groups. According to the 13th-century jurists, in the towns the raʾīs presided over the Cour des Syriens, and there is other evidence that they occasionally led local troops. The courts of the indigenous communities administered civil disputes and minor criminality. The Frankish cour des bourgeois (courts of the burgesses, the name given to non-noble Franks) dealt with more serious offences and cases involving Franks. The level of assimilation is difficult to identify, as there is little material evidence. The archaeology is culturally exclusive, and written evidence indicates deep religious divisions. Some historians assume the states' heterogeneity eroded formal apartheid. The key differentiator in status and economic position was between urban and rural dwellers. Indigenous Christians could gain higher status and acquire wealth through commerce and industry in towns, but few Muslims lived in urban areas except those in servitude. Frankish royalty reflected the region's diversity. Queen Melisende was part Armenian and married Fulk from Anjou. Their son Amalric married a Frank from the Levant before marrying a Byzantine Greek. The nobility's use of Jewish, Syrian, and Muslim physicians appalled William of Tyre. Antioch became a centre of cultural interchange through Greek- and Arabic-speaking Christians. The indigenous peoples showed the Frankish nobility traditional deference, and in return the Franks adopted their dress, food, housing, and military techniques. However, Frankish society was not a cultural melting pot. Inter-communal relations were shallow, identities remained separate, and the other communities were considered alien. Economy The crusader states were economic centres connected by sea routes with Europe and by land with Mesopotamia, Syria and the urban economies of the Nile. Commerce continued, with the coastal cities providing maritime outlets for the Islamic hinterland, and unprecedented volumes of eastern wares were exported to Europe.
Byzantine–Muslim mercantile growth may well have occurred in the 12th and 13th centuries even without the crusades, but it is likely that the crusades hastened it. Western European populations and economies were booming, creating a growing social class that wanted artisanal products and eastern imports. European fleets expanded with better ships, navigation improved, and fare-paying pilgrims subsidised voyages. Largely indigenous agricultural production flourished before the fall of the first kingdom in 1187 but was negligible afterwards. Franks, Muslims, Jews and indigenous Christians traded crafts in the souks, the teeming oriental bazaars of the cities. Olives, grapes, wheat, and barley were the important agricultural products before Saladin's conquests. Glass making and soap production were major industries in towns. Italians, Provençals, and Catalans monopolised shipping, imports, exports, transportation, and banking. Taxes on trade, markets, pilgrims, and industry combined with estate revenue to provide the Frankish nobles and church with income. Seigniorial monopolies, or bans, compelled the use of landowners' mills, ovens and other facilities. The presence of hand-mills in most households is evidence of the serfs' circumvention of some monopolies. The centres of production were Antioch, Tripoli, Tyre, and Beirut. Textiles, with silk particularly prized, glass, dyestuffs, olives, wine, sesame oil, and sugar were exported. The Franks provided an import market for clothing and finished goods. They adopted the more monetised indigenous economic system, using a hybrid coinage of northern Italian and southern French European silver coins; Frankish copper coins minted in Arabic and Byzantine styles; and silver and gold dirhams and dinars. After 1124, the Franks copied Egyptian dinars, creating Jerusalem's gold bezant. Following the collapse of the first kingdom of Jerusalem in 1187, trade replaced agriculture in the economy, and the circulation of Western European coins predominated. Although Tyre, Sidon and Beirut minted silver pennies and copper coins, there is little evidence of systematic attempts to create a unified currency. The Italian maritime republics of Pisa, Venice, and Genoa were enthusiastic crusaders whose commercial wealth provided the Franks with financial foundations and naval resources. In return, these cities and others, like Amalfi, Barcelona, and Marseilles, received commercial rights and access to Eastern markets. Over time, these privileges developed into colonial communities with property and jurisdiction. Largely located in the ports of Acre, Tyre, Tripoli, and Sidon, communes of Italians, Provençals, and Catalans had distinct cultures and exerted autonomous political power separate from the Franks. They remained intricately linked to their towns of origin, a connection that gave them monopolies over foreign trade, banking, and shipping. Opportunities to extend trade privileges were taken. In 1124, for example, the Venetians received one-third of Tyre and its territories, with exemption from taxes, in return for Venetian participation in the siege. These ports were unable to replace Alexandria and Constantinople as the region's major centres of commerce, but they competed with monarchs and each other to maintain economic advantage. The communes' members never numbered more than a few hundred. Their power derived from the support of their home cities. By the mid-13th century, the rulers of the communes barely recognised the authority of the Franks and divided Acre into several fortified miniature republics.
Art and architecture Prawer argued that no major Western European cultural figure settled in the states, but that the imagery of Outremer in Western European poetry encouraged others to travel east. Historians believe that military architecture demonstrates a synthesis of the European, Byzantine and Muslim traditions, providing the most original and impressive artistic achievement of the crusades. Castles were a symbol of the dominance of the Frankish minority over a hostile majority population and acted as administrative centres. Modern historiography rejects the 19th-century consensus that Western Europeans learnt the basis of military architecture from the Near East; Europe had already experienced growth in defensive technology. Contact with Arab fortifications originally constructed by the Byzantines influenced developments in the east, but there is little evidence for differentiating between the influence of design cultures and the constraints of the situation. Castles included oriental design features, like large water reservoirs, and excluded occidental features, like moats. Church design was in the French Romanesque style, seen in the 12th-century rebuilding of the Holy Sepulchre. The Franks retained earlier Byzantine detail but added northern French, Aquitanian, and Provençal style arches and chapels. The column capitals of the south facade follow classical Syrian patterns, but there is little evidence of indigenous influence in sculpture. Visual culture shows the assimilated nature of the society. The decoration of shrines, painting, and the production of manuscripts demonstrated the influence of indigenous artists. Frankish practitioners borrowed methods from Byzantine and indigenous artists in iconographical practice. Monumental and panel painting, mosaics and illuminations in manuscripts adopted an indigenous style, leading to a cultural synthesis shown in the Church of the Nativity. Wall mosaics were unknown in the west but widespread in the crusader states. It is unknown whether the mosaic work was done by indigenous craftsmen or learnt by Frankish ones, but it shows the evolution of a distinctive and original artistic style. Workshops housed Italian, French, English, and indigenous craftsmen producing illustrated manuscripts showing a cross-fertilisation of ideas and techniques. One example is the Melisende Psalter. This style either reflected or influenced the taste of patrons of the arts in increasingly stylised Byzantine-influenced content. Icons were previously unknown to the Franks; icon painting nevertheless continued, occasionally in a Frankish style and sometimes depicting Western European saints, a practice that fed into Italian panel painting. It is difficult to trace illustration and castle design to their sources. It is simpler for textual sources: translations made in Antioch are notable, but of secondary importance to the works from Muslim Spain and the hybrid culture of Sicily. Religion There is no written evidence that the Franks or local Christians recognised significant religious differences until the 13th century, when the jurists used phrases like 'men not of the rule of Rome'. The crusaders filled Greek Orthodox ecclesiastical positions that became vacant, such as on the death of Simeon II, when the Frank Arnulf of Chocques succeeded him as patriarch of Jerusalem. The appointment of Latin bishops had little effect on the Arabic-speaking Orthodox Christians, as the previous bishops had been foreign Byzantine Greeks.
Greeks were used as coadjutor bishops to administer indigenous populations without clergy, and Latin and Orthodox Christians often shared churches. In Antioch, Greeks occasionally replaced Latin patriarchs. In the town of Gaza, which was under the control of the Templars but had previously been a bishopric, the Greek and Syrian population was given to the care of Meletos, bishop of Eleutheropolis. Thus, Meletos also acted as the recognised bishop of Gaza without the ecclesiastic complications that could have occurred had a bishop been assigned to a town ruled by a religious order. Toleration continued, but there was an interventionist papal response from Jacques de Vitry, Bishop of Acre. Armenians, Copts, Jacobites, Nestorians and Maronites had greater religious autonomy, independently appointing bishops, as they were considered outside the Catholic Church. The Franks had discriminatory laws against Jews and Muslims that prevented assimilation; overall, their legal status was a Latin Christian adaptation of the dhimmi system. They were prevented from inhabiting Jerusalem, there were sumptuary laws that prevented them from wearing Frankish clothing, and the de jure punishment for sexual relations between Muslims and Christians was mutilation. Jews and Muslims retained autonomous systems of religious law. They were discriminated against in civil and intercommunal law and had to pay a poll tax (capitātiō). Muslims were marginalised from town life, but Muslim villagers under crusader dominion seem to have fared as well as, or perhaps better than, their counterparts in Muslim lands, and Bedouins had a privileged status. Some mosques, especially larger ones, were converted into Latin Christian churches, but most remained in Muslim possession, and in certain places Muslims could pray in portions of erstwhile mosques; one such personally experienced episode is described by Usama ibn Munqidh in the early 1140s. There was no forced conversion of Muslims, as this would end peasants' servile status; moreover, crusaders generally showed no interest in converting Jews, Muslims or Miaphysites to Latin Christianity. Followers of different religions could also engage in shared forms of folk religion, as at the Cave of the Patriarchs, although the cultic site there had been divided. Legacy The Franks' habitual following of the customs of their Western European homelands meant that they made few lasting innovations. Three notable exceptions were the military orders, warfare innovations, and fortification developments. No major European poet, theologian, scholar, or historian settled in the region, though new imagery and ideas in Western European poetry can be traced to some who visited as pilgrims. Although they did not migrate east themselves, their output often encouraged others to journey there on pilgrimage. Historians believe the crusader military architecture demonstrates a synthesis of the European, Byzantine, and Muslim traditions and that it is the most impressive artistic achievement of the crusades. After Acre fell, the Hospitallers relocated first to Cyprus, then conquered and ruled Rhodes (1309–1522) and Malta (1530–1798). The Sovereign Military Order of Malta survives to the present day. Philip IV of France probably had financial and political reasons to oppose the Knights Templar. He exerted pressure on Pope Clement V, who responded in 1312 by dissolving the order on probably false grounds of sodomy, magic, and heresy.
The raising, transportation, and supply of armies led to flourishing trade between Europe and the crusader states. The Italian city-states of Genoa and Venice flourished through profitable trading communes. Many historians argue that the interaction between western Christian and Islamic cultures was a significant and ultimately positive influence on the development of European civilisation and the Renaissance. Relations between Europeans and the Islamic world stretched across the length of the Mediterranean Sea, making it difficult for historians to identify what proportion of cultural cross-fertilisation originated in the crusader states, Sicily and Spain. Historiography In the 19th century, the crusader states became a subject of study distinct from the crusades, particularly in France. Joseph François Michaud's influential narratives concentrated on war, conquest, and settlement, and explicitly linked them to France's colonial ambitions in the Levant. Emmanuel Guillaume-Rey's Les colonies franques de Syrie aux XIIme et XIIIme siècles described Frankish settlements in the Levant as colonies where the offspring of mixed marriages adopted local traditions. The first American crusade historian, Dana Carleton Munro, described the care the Franks took to 'win the goodwill of the natives'. Historians rejected this approach in the 20th century. R. C. Smail argued that it posited an integrated society, which did not exist, in order to justify French colonialism. The new consensus was that the society was segregated, with limited social and cultural interchange. Focusing on evidence of social, legal, and political frameworks in Jerusalem, Joshua Prawer and Jonathan Riley-Smith presented the widely accepted view of a society that was predominantly urban, isolated from the indigenous peoples, with separate legal and religious systems. Prawer's 1972 work, The Latin Kingdom of Jerusalem: European Colonialism in the Middle Ages, extended this analysis: the lack of integration was based on economics, with the Franks' position depending on a subjugated, disenfranchised local population. The Franks' primary motivations were economic. Islamic historian Carole Hillenbrand argued that the Islamic population responded with resentment, suspicion, and rejection of the Franks. This model has been challenged by historians such as Ronnie Ellenblum using archaeological research, but no alternative model has been accepted. Christopher Tyerman points out that this is not a return to older theories, as the same sources are used and the archaeology is unprovable. Specialist Denys Pringle notes that it does not contradict the earlier view. Hans Eberhard Mayer had already advised that the number of Franks living in rural settlements should not be underestimated. These theories support the idea that the crusader states formed part of the wider expansion of Western Europe, driven by religious reform and growing papal power. However, historians argue there was no vigorous church reform in the East or resulting persecution of Jews and heretics. Some consider the regulations of the 1120 Council of Nablus exceptional, and Benjamin Z. Kedar believed they followed Byzantine, rather than western reformist, precedent. The debate has led historians like Claude Cahen, Jean Richard, and Christopher MacEvitt to argue that the history of the crusader states is distinct from that of the crusades, allowing the application of other analytical techniques that place the crusader states in the context of Near Eastern politics.
These ideas are still in the process of articulation by modern historians.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Quebec] | [TOKENS: 21654] |
Contents Quebec Quebec[a] (French: Québec)[b] is Canada's largest province by area.[c] Located in Central Canada, it is the only Francophone-majority province in the country, being home to Québécois French. It shares borders with the provinces of Ontario to the west, Newfoundland and Labrador to the northeast, New Brunswick to the southeast and a coastal border with the territory of Nunavut. In the south, it shares a border with the United States.[d] Quebec has a population of around 8 million, making it Canada's second-most populous province, behind only Ontario. Between 1534 and 1763, what is now Quebec was the French colony of Canada and was the most developed colony in New France. Following the Seven Years' War, Canada became a British colony, first as the Province of Quebec (1763–1791), then Lower Canada (1791–1841), and lastly part of the Province of Canada (1841–1867) as a result of the Lower Canada Rebellion. It was confederated with Ontario, Nova Scotia, and New Brunswick in 1867. Until the early 1960s, the Catholic Church played a large role in the social and cultural institutions of Quebec. However, the Quiet Revolution of the 1960s to 1980s increased the role of the Government of Quebec in l'État québécois (the public authority of Quebec). The Government of Quebec functions within the context of a Westminster system and is both a liberal democracy and a constitutional monarchy. The Premier of Quebec acts as head of government. Independence debates have played a large role in Quebec politics. Quebec society's cohesion and specificity are based on three of its unique statutory documents: the Quebec Charter of Human Rights and Freedoms, the Charter of the French Language, and the Civil Code of Quebec. Furthermore, unlike elsewhere in Canada, law in Quebec is mixed: private law is exercised under a civil-law system, while public law is exercised under a common-law system. The economy of Quebec is mainly supported by its large service sector and varied industrial sector. For exports, it leans on the key industries of aeronautics, hydroelectricity, mining, pharmaceuticals, aluminum, wood, and paper. Quebec is well known for producing maple syrup, for its comedy, and for making hockey one of the most popular sports in Canada. It is also renowned for its distinct culture; the province produces literature, music, films, TV shows, festivals, and more. Etymology The name Québec comes from an Algonquin word meaning 'narrow passage' or 'strait'. The name originally referred to the area around Quebec City, where the Saint Lawrence River narrows to a cliff-lined gap. Early variations in the spelling included Québecq and Kébec. French explorer Samuel de Champlain chose the name Québec in 1608 for the colonial outpost he would use as the administrative seat for New France. History The Paleo-Indians, theorized to have migrated from Asia to America between 20,000 and 14,000 years ago, were the first people to establish themselves on the lands of Quebec, arriving after the Laurentide Ice Sheet melted roughly 11,000 years ago. From them, many ethnocultural groups emerged. By the time of the European explorations of the 1500s, there were eleven Indigenous peoples: the Inuit and ten First Nations – the Abenaki, Algonquin (or Anichinabés), Atikamekw, Cree, Huron-Wendat, Wolastoqiyik, Miꞌkmaq, Iroquois, Innu and Naskapi. The Algonquians were organized into seven political entities and lived nomadic lives based on hunting, gathering, and fishing. The Inuit fished and hunted whales and seals along the coasts of Hudson and Ungava Bays.
In the 15th century, the Byzantine Empire fell, prompting Western Europeans to search for new sea routes to the Far East. Around 1522–23, Giovanni da Verrazzano persuaded King Francis I of France to commission an expedition to find a western route to Cathay (China) via a Northwest Passage. Though this expedition was unsuccessful, it established the name New France for northeast North America. In his first expedition, ordered by the Kingdom of France, Jacques Cartier became the first European explorer to discover and map Quebec when he landed in Gaspé on July 24, 1534. In the second expedition, in 1535, Cartier explored the lands of Stadacona and named the village and its surrounding territories Canada (from kanata, 'village' in Iroquois). Cartier returned to France with about 10 St. Lawrence Iroquoians, including Chief Donnacona. In 1540, Donnacona told the legend of the Kingdom of Saguenay to the King, inspiring him to order a third expedition, this time led by Jean-François de La Rocque de Roberval; it was unsuccessful in its goal of finding the kingdom. After these expeditions, France mostly abandoned North America for 50 years because of its financial crisis and its involvement in the Italian Wars and religious wars. Around 1580, the rise of the fur trade reignited French interest, and New France became a colonial trading post. In 1603, Samuel de Champlain travelled to the Saint Lawrence River and, on Pointe Saint-Mathieu, established a defence pact with the Innu, Wolastoqiyik and Mi'kmaq that would be "a decisive factor in the maintenance of a French colonial enterprise in America despite an enormous numerical disadvantage vis-à-vis the British". Thus also began French military support for the Algonquian and Huron peoples against Iroquois attacks; these conflicts, known as the Iroquois Wars, lasted from the early 1600s to the early 1700s. In 1608, Samuel de Champlain returned to the region as head of an exploration party. On July 3, 1608, with the support of King Henry IV, he founded the Habitation de Québec (now Quebec City) and made it the capital of New France and its regions. The settlement was built as a permanent fur trading outpost, where First Nations traded furs for French goods such as metal objects, guns, alcohol, and clothing. Missionary groups arrived in New France after the founding of Quebec City. Coureurs des bois and Catholic missionaries used river canoes to explore the interior and establish fur trading forts. The Compagnie des Cent-Associés, which had been granted a royal mandate to manage New France in 1627, introduced the Custom of Paris and the seigneurial system, and forbade settlement by anyone other than Catholics. In 1629, Quebec City surrendered, without battle, to English privateers during the Anglo-French War; in 1632, the English king agreed to return it under the Treaty of Saint-Germain-en-Laye. Trois-Rivières was founded at de Champlain's request in 1634. Paul de Chomedey de Maisonneuve founded Ville-Marie (now Montreal) in 1642. In 1663, the Company of New France ceded Canada to King Louis XIV, who made New France a royal province of France. New France was now a true colony, administered by the Sovereign Council of New France from Quebec City. A governor-general governed Canada and its administrative dependencies: Acadia, Louisiana and Plaisance. The French settlers were mostly farmers, known as "Canadiens" or "Habitants". Though there was little immigration, the colony grew because of the Habitants' high birth rates.
In 1665, the Carignan-Salières regiment, bringing 1,200 new men, developed the string of fortifications known as the "Valley of Forts" to protect against Iroquois invasions. To redress the gender imbalance and boost population growth, King Louis XIV sponsored the passage of approximately 800 young French women (the King's Daughters) to the colony. In 1666, intendant Jean Talon organized the first census and counted 3,215 Habitants. Talon enacted policies to diversify agriculture and encourage births, which, by 1672, had increased the population to 6,700. New France's territory grew to extend from Hudson Bay to the Gulf of Mexico, and would encompass the Great Lakes. In the early 1700s, Governor Callières concluded the Great Peace of Montreal, which not only confirmed the alliance between the Algonquian peoples and New France, but definitively ended the Iroquois Wars. From 1688 onwards, the fierce competition between the French and British to control North America's interior and monopolize the fur trade pitted New France and its Indigenous allies against the Iroquois and the English in four successive wars, called the French and Indian Wars by Americans and the Intercolonial Wars in Quebec. The first three were King William's War (1688–1697), Queen Anne's War (1702–1713), and King George's War (1744–1748). In 1713, following the Peace of Utrecht, the Duke of Orléans ceded Acadia and Plaisance Bay to Great Britain, but retained Île Saint-Jean and Île-Royale, where the Fortress of Louisbourg was subsequently erected. These losses were significant, since Plaisance Bay was the primary communication route between New France and France, and Acadia contained 5,000 Acadians. In the siege of Louisbourg (1745), the British were victorious, but returned the city to France after war concessions. The last of the four French and Indian Wars was the Seven Years' War ("The War of the Conquest" in Quebec), which lasted from 1754 to 1763. In 1754, tensions escalated over control of the Ohio Valley, as authorities in New France became more aggressive in their efforts to expel British traders and colonists. That year, George Washington launched a surprise attack on a group of sleeping Canadien soldiers, known as the Battle of Jumonville Glen, the first battle of the war. In 1755, Governor Charles Lawrence and Officer Robert Monckton ordered the forceful expulsion of the Acadians. In 1758, on Île-Royale, British General James Wolfe besieged and captured the Fortress of Louisbourg. This allowed him to control access to the Gulf of St. Lawrence through the Cabot Strait. In 1759, he besieged Quebec for three months from Île d'Orléans. Wolfe then stormed Quebec and fought against Montcalm for control of the city in the Battle of the Plains of Abraham. After the British victory, the king's lieutenant, the lord of Ramezay, concluded the Articles of Capitulation of Quebec. During the spring of 1760, the Chevalier de Lévis besieged Quebec City and forced the British to entrench themselves during the Battle of Sainte-Foy. However, the loss of the French vessels sent to resupply New France after the fall of Quebec City, during the Battle of Restigouche, marked the end of France's efforts to retake the colony. Governor Pierre de Rigaud, marquis de Vaudreuil-Cavagnial, signed the Articles of Capitulation of Montreal on September 8, 1760. While awaiting the results of the Seven Years' War in Europe, New France was put under a British military regime led by Governor James Murray.
In 1762, Commander Jeffery Amherst ended the French presence in Newfoundland at the Battle of Signal Hill. France secretly ceded the western part of Louisiana and the Mississippi River Delta to Spain via the Treaty of Fontainebleau. On February 10, 1763, the Treaty of Paris concluded the war, and France ceded its North American possessions to Great Britain. Thus France put an end to New France and abandoned the remaining 60,000 Canadiens, who sided with the Catholic clergy in refusing to take an oath to the British Crown. The rupture from France would provoke a transformation within the descendants of the Canadiens that would eventually result in the birth of a new nation. After the British acquired Canada in 1763, the British government established a constitution for the newly acquired territory under the Royal Proclamation. The Canadiens were subordinated to the government of the British Empire and confined to a region of the St. Lawrence Valley and Anticosti Island called the Province of Quebec. With unrest growing in their southern colonies, the British worried that the Canadiens might support what would become the American Revolution. To secure allegiance to the British crown, Governor James Murray and later Governor Guy Carleton promoted the need for accommodations, resulting in the enactment of the Quebec Act of 1774. This act allowed the Canadiens to regain their civil customs, return to the seigneurial system, regain certain rights including the use of French, and reappropriate their old territories: Labrador, the Great Lakes, the Ohio Valley, Illinois Country and the Indian Territory. As early as 1774, the Continental Congress of the separatist Thirteen Colonies attempted to rally the Canadiens to its cause. However, its troops failed to defeat the British counteroffensive during the Invasion of Quebec in 1775. Most Canadiens remained neutral, though some regiments allied themselves with the Americans in the Saratoga campaign of 1777. When the British recognized the independence of the rebel colonies at the signing of the Treaty of Paris of 1783, they ceded Illinois and the Ohio Valley to the newly formed United States and set the 45th parallel as the border, drastically reducing Quebec's size. Some United Empire Loyalists from the US migrated to Quebec and populated various regions. Dissatisfied with the legal rights under the French seigneurial regime which applied in Quebec, and wanting to use the British legal system to which they were accustomed, the Loyalists protested to British authorities until the Constitutional Act of 1791 was enacted, dividing the Province of Quebec into two distinct colonies starting from the Ottawa River: Upper Canada to the west (predominantly Anglo-Protestant) and Lower Canada to the east (Franco-Catholic). Lower Canada's lands consisted of the coasts of the Saint Lawrence River, Labrador and Anticosti Island, with the territory extending north to Rupert's Land, and south, east and west to the borders with the US, New Brunswick, and Upper Canada. The creation of Upper and Lower Canada allowed Loyalists to live under British laws and institutions, while the Canadiens could maintain their French civil law and Catholic religion. Governor Haldimand drew Loyalists away from Quebec City and Montreal by offering free land on the north shore of Lake Ontario to anyone willing to swear allegiance to George III.
During the War of 1812, Charles-Michel de Salaberry became a hero by leading the Canadian troops to victory at the Battle of the Chateauguay. The American defeat caused them to abandon the Saint Lawrence Campaign, their major strategic effort to conquer Canada. Gradually, the Legislative Assembly of Lower Canada, which represented the people, came into conflict with the superior authority of the Crown and its appointed representatives. Starting in 1791, the government of Lower Canada was criticized and contested by the Parti canadien. In 1834, the Parti canadien presented its 92 Resolutions, political demands which expressed a loss of confidence in the British monarchy. Discontentment intensified throughout the public meetings of 1837, and the Lower Canada Rebellion began that year, when Louis-Joseph Papineau and Robert Nelson led residents of Lower Canada to form an armed group called the Patriotes. They declared independence in 1838, guaranteeing rights and equality for all citizens without discrimination. Their actions resulted in rebellions in both Lower and Upper Canada. The Patriotes were victorious in their first battle, the Battle of Saint-Denis. However, they were disorganized and badly equipped, leading to their loss against the British army in the Battle of Saint-Charles, and defeat in the Battle of Saint-Eustache. In response to the rebellions, Lord Durham was asked to undertake a study and prepare a report offering a solution to the British Parliament. Durham recommended that the Canadiens be culturally assimilated, with English as their only official language. To do this, the British passed the Act of Union 1840, which merged Upper Canada and Lower Canada into a single colony: the Province of Canada. Lower Canada became the francophone and densely populated Canada East, and Upper Canada became the anglophone and sparsely populated Canada West. This union, unsurprisingly, was the main source of political instability until 1867. Despite their population gap, Canada East and Canada West obtained an identical number of seats in the Legislative Assembly of the Province of Canada, which created representation problems. In the beginning, Canada East was underrepresented because of its larger population. Over time, however, massive immigration from the British Isles to Canada West occurred. Since the two regions continued to have equal representation, this meant it was now Canada West that was under-represented. The representation issues were called into question by debates on "representation by population". Around this period, the British population appropriated the term "Canadian" to refer to themselves, referring to Canada, their place of residence. The French population, who had thus far been "the Canadians", began to be identified with their ethnic community under the name "French Canadian", as the "French of Canada". As access to new lands remained problematic because they were still monopolized by the Château Clique, an exodus of Canadiens towards New England began and went on for the next hundred years. This phenomenon is known as the Grande Hémorragie and threatened the survival of the Canadien nation. The massive British immigration ordered from London that followed the failed rebellion compounded this. To combat it, the Church adopted the "revenge of the cradle" policy. In 1844, the capital of the Province of Canada was moved from Kingston to Montreal.
During Ireland's Great Potato Famine (1845–1852), nearly 100,000 Irish refugees passed through Grosse Isle's quarantine station, with many settling in Quebec and integrating into French-Canadian society. Political unrest came to a head in 1849, when English Canadian rioters set fire to the Parliament Building in Montreal following the enactment of the Rebellion Losses Bill, a law that compensated French Canadians whose properties were destroyed during the rebellions of 1837–1838. This bill, resulting from the Baldwin-La Fontaine coalition and Lord Elgin's advice, was important as it established the notion of responsible government. In 1854, the seigneurial system was abolished, the Grand Trunk Railway was built, and the Canadian–American Reciprocity Treaty was implemented. In 1866, the Civil Code of Lower Canada was adopted. In 1864, negotiations had begun for Canadian Confederation between the Province of Canada, New Brunswick and Nova Scotia at the Charlottetown Conference and the Quebec Conference. After having fought as a Patriote, George-Étienne Cartier entered politics in the Province of Canada, becoming one of the co-premiers and an advocate for the union of the British North American provinces. He became a leading figure at the Quebec Conference, which produced the Quebec Resolutions, the foundation for Canadian Confederation. Recognized as a Father of Confederation, he successfully argued for the establishment of the province of Quebec, initially composed of the historic heart of the territory of the French Canadian nation and where French Canadians would most likely retain majority status. Following the London Conference of 1866, the Quebec Resolutions were implemented as the British North America Act, 1867, and brought into force on July 1, 1867, creating Canada, composed of four founding provinces: New Brunswick, Nova Scotia, Ontario and Quebec. These last two came from splitting the Province of Canada, using the old borders of Lower Canada for Quebec and of Upper Canada for Ontario. On July 15, 1867, Pierre-Joseph-Olivier Chauveau became Quebec's first premier. Between the late 19th and the late 20th century, Montreal was Canada's and Quebec's most populous city, as well as their economic and cultural centre. It was, as such, often among the first to adopt new technologies. It launched Canada's first public transit system in 1861 with horse-drawn streetcars, started a telephone service in 1878, and received electricity in 1885. The new Dominion quickly became interested in expansionism, especially westward, purchasing Rupert's Land from the Hudson's Bay Company in 1870. In 1885, it fought against the francophone Métis in the North-West Rebellion, and did not grant clemency to Louis Riel, their leader, after he was sentenced to death. This caused several Quebec Liberal and Conservative MLAs to form the Parti National out of anger. This, in combination with the Manitoba Schools Question, also helped turn the promotion and defence of the rights of French Canadians into an important concern. Gradually gaining in popularity, clerico-nationalists – who promoted the Triple Ideal of Catholicism, French, and rural life, alongside other traditional values (e.g., traditional gender roles, resistance to cultural assimilation, anti-progressivism, hierarchy) – went on to wield significant influence until the 1960s. Montreal continued to adopt new advances, introducing electric streetcars in 1892 and seeing bicycles and automobiles populate its roads in the 1890s and 1900s respectively.
The Canadian Parliament, meanwhile, expanded Quebec in 1898 by enacting the Quebec Boundary Extension Act, 1898, which gave Quebec part of Rupert's Land. Under the aegis of the Catholic Church and the political action of Henri Bourassa, symbols of French Canadian national pride were developed, like the Flag of Carillon and "O Canada", a patriotic song composed for Saint-Jean-Baptiste Day. Many organizations went on to consecrate the affirmation of the French-Canadian people, including the caisses populaires Desjardins in 1900, the Club de hockey Canadien in 1909, Le Devoir in 1910, the Congress on the French language in Canada in 1912, and L'Action nationale in 1917. In 1909, the Quebec government passed a law requiring wood and pulp to be processed in Quebec, which helped slow the Grande Hémorragie by allowing Quebec to export its finished products to the US instead of its labourers. In 1910, Armand Lavergne passed the Lavergne Law, the first language legislation in Quebec, which required the use of French alongside English on tickets, documents, bills and contracts issued by transportation and public utility companies. At this time, companies rarely recognized the majority language of Quebec. This movement may explain why Ontario's Regulation 17 (1912–1927) was fought until its repeal. In 1912, the Canadian Parliament enacted the Quebec Boundaries Extension Act, 1912, which gave Quebec its final extension: another part of Rupert's Land called the District of Ungava. Quebec's borders now met the Hudson Strait and vaguely overlapped with Labrador's. When the First World War broke out in 1914, Canada was automatically involved and many English Canadians volunteered. However, because they did not feel the same connection to the British Empire and there was no direct threat to Canada, most French Canadians saw no reason to fight. By late 1916, casualties and waning numbers of volunteers were beginning to cause reinforcement problems. After enormous difficulty in the federal government, because almost every French-speaking MP opposed conscription while almost all English-speaking MPs supported it, the Military Service Act, 1917 became law on August 29, 1917. French Canadians protested in the Conscription Crisis of 1917, which led to the Quebec riot [fr]. In 1919, the prohibition of spirits was enacted following a provincial referendum. In 1920, Montreal hosted Canada's first public radio broadcast. Then, in 1921, prohibition was abolished by the Alcoholic Beverages Act, which created the SAQ and allowed the government to control the sale of alcohol. This resulted in Quebec having the shortest and lightest prohibition in North America, as well as reaping huge profits from the sale of alcohol to tourists. Since the location of the border between Canada and Labrador had never been clear, in 1927 the British Judicial Committee of the Privy Council gathered to draw one. However, the Quebec government did not recognize the ruling, resulting in a boundary dispute which remains ongoing. In 1931, the Statute of Westminster was enacted, which confirmed the autonomy of the Dominions – including Canada and its provinces – from the UK, as well as their free association in the Commonwealth of Nations. In the 1930s, Quebec's economy was affected by the Great Depression, which greatly reduced US demand for Quebec exports. Between 1929 and 1932, the unemployment rate increased from 8% to 26%.
In an attempt to remedy this, the Quebec government enacted infrastructure projects, campaigns to colonize distant regions, financial assistance for farmers, and the secours directs – the ancestor of Canada's Employment Insurance. The poor work opportunities in the US also finally ended the Grande Hémorragie. French Canadians remained opposed to conscription during the Second World War. When Canada declared war in September 1939, the federal government pledged not to conscript soldiers for overseas service. As the war went on, more and more English Canadians voiced support for conscription, despite firm opposition from French Canada. Following a 1942 plebiscite in which 73% of Quebec's residents voted against conscription, while 80% or more voted for it in every other province, the federal government passed Bill 80, which allowed conscription for overseas service. In the Conscription Crisis of 1944, the Bloc Populaire emerged to fight conscription. The stark differences between the values of French and English Canada popularized the expression the "Two Solitudes". In the wake of the conscription crisis, Maurice Duplessis of the Union Nationale rose to power once more. His government emphasized clerico-nationalist values and implemented conservative policies, in a period now known as the Grande Noirceur. These included defending provincial autonomy, promoting Quebec's Catholic and francophone heritage, and favouring laissez-faire capitalism over the emerging welfare state. However, with major changes accelerating – such as the appearance of television, the baby boom, workers' conflicts, the electrification of the countryside, the emergence of a middle class, rural exodus and urbanization, the expansion of universities and bureaucracies, the creation of motorways, and a renaissance of literature and poetry – French Canadian society began to develop new ideologies and aspirations. The Quiet Revolution was an intense period of modernization, secularization and social reform, in which French Canadians strongly expressed their concern and dissatisfaction with their inferior socioeconomic position and with the cultural assimilation of francophone minorities in the English-majority provinces. It resulted in the formation of the modern Québécois identity and of Quebec nationalism. In 1960, Jean Lesage's Liberal Party was brought to power with a two-seat majority, having campaigned on the slogan "It's time for things to change". This government fundamentally restructured Quebec's institutions, creating a modern welfare state through new ministries for education, social affairs, and economic development. It created the CDPQ, the Ministry of Education, the OQLF, the Régie des rentes and the Société générale de financement, and modernized the Labour Code and the Ministry of Social Affairs. In 1962, the government dismantled the financial syndicates of Montreal's Saint Jacques Street to weaken the grip of the traditional English-Canadian economic elites. Also in 1962, Natural Resources Minister René Lévesque led the nationalization of Quebec's private electricity companies to create a unified Hydro-Québec. This massive project was estimated at over $600 million for the acquisition of eleven companies. The Quiet Revolution was particularly characterized by the Liberal Party's 1962 slogan "Masters in our own house", which announced to the Anglo-American conglomerates that dominated the economy and natural resources a collective will for freedom on the part of the French-Canadian people.
As a result of confrontations between the lower clergy and the laity, state institutions began to deliver services without the assistance of the church, and many parts of civil society became more secular. In 1965, the Royal Commission on Bilingualism and Biculturalism wrote a preliminary report underlining Quebec's distinct character and promoted open federalism, a political attitude guaranteeing Quebec a minimum amount of consideration. To favour Quebec during its Quiet Revolution, Lester B. Pearson adopted a policy of open federalism. In 1966, the Union Nationale was re-elected and continued on with major reforms. In 1967, René Lévesque introduced the concept of sovereignty-association in his manifesto Option Québec, proposing political independence with economic partnership, including a common currency, free trade, and joint institutions. It sparked a constitutional debate on the political future of the province by pitting federalist and sovereignist doctrines against each other. The meetings of the Estates General of French Canada in 1967 marked a tipping point, at which relations between Quebec and the other francophones of Canada ruptured. This deeply affected both parties by fracturing the pan-Canadian French-Canadian identity that had existed before then into Quebec nationalism and several minority francophone identities elsewhere. Also in 1967, President Charles de Gaulle of France visited Quebec to attend Expo 67. There, he addressed a crowd of more than 100,000, making a speech that ended with the exclamation "Long live free Quebec". This declaration had a profound effect on Quebec by bolstering the burgeoning modern Quebec sovereignty movement, and it resulted in a diplomatic crisis between France and Canada. Following this, various civilian groups developed, sometimes confronting public authority, as in the October Crisis of 1970. In 1968, class conflicts and changes in mentalities intensified. Quebec artists also started celebrating their distinct identity: Michel Tremblay's 1968 play Les Belles-sœurs legitimized joual (working-class Quebec French) as a literary language, singer-songwriters like Félix Leclerc and Gilles Vigneault started a new style of Quebec popular music, and many local films began to be produced. In 1969, the federal Official Languages Act was passed to introduce a linguistic context conducive to Quebec's development. In 1973, the Liberal government of Robert Bourassa initiated the James Bay Project on La Grande River. In 1974, it enacted the Official Language Act, which made French the official language of Quebec. In 1975, it established the Charter of Human Rights and Freedoms and the James Bay and Northern Quebec Agreement. Quebec's first modern sovereignist government, led by René Lévesque, materialized when the Parti Québécois was brought to power in the 1976 Quebec general election. The Charter of the French Language came into force the following year, increasing the use of French. Between 1966 and 1969, the Estates General of French Canada had confirmed the state of Quebec to be the nation's fundamental political milieu and affirmed its right to self-determination. In the 1980 referendum on sovereignty-association, 40% voted for and 60% against. After the referendum, Lévesque went back to Ottawa to continue negotiating constitutional changes. On November 4, 1981, the Kitchen Accord took place.
Delegations from the other nine provinces and the federal government reached an agreement in the absence of Quebec's delegation, which had left for the night. Because of this, the National Assembly refused to recognize the new Constitution Act, 1982, which patriated the Canadian constitution and made modifications to it. The 1982 amendments apply to Quebec despite Quebec never having consented to them. Between 1982 and 1992, the Quebec government's attitude changed to prioritize reforming the federation. Attempts at constitutional amendments by the Mulroney and Bourassa governments ended in failure with the Meech Lake Accord of 1987 and the Charlottetown Accord of 1992, resulting in the creation of the Bloc Québécois. The failures also led to the re-election of the Parti Québécois in 1994 and the return to power of Jacques Parizeau, who had promised to hold a sovereignty referendum within a year of election. In 1995, Parizeau called a referendum on Quebec's independence from Canada. This consultation ended in a close outcome: 50.6% "no" and 49.4% "yes" (notably, over 60% of francophones voted "yes" and over 90% of anglophones voted "no"). In 1996, the federal government launched the Sponsorship Program to increase federal visibility in Quebec. In 2000, following the Supreme Court of Canada's decision on the Reference Re Secession of Quebec, the Parliament of Canada passed a legal framework, called the Clarity Act, within which governments would act in another referendum. In 2002, the Gomery commission and the media revealed the sponsorship scandal, in which $539,000 was illegally spent and well-connected agencies received millions for minimal work. This scandal contributed to the Liberals' defeat in the 2006 federal election. On October 30, 2003, the National Assembly voted unanimously to affirm "that the people of Québec form a nation". On November 27, 2006, the House of Commons followed with a symbolic motion declaring "that this House recognize that the Québécois form a nation within a united Canada." In 2007, the Parti Québécois was pushed back to official opposition in the National Assembly, with the Liberal Party leading. During the 2011 Canadian federal election, Quebec voters rejected the Bloc Québécois in favour of the previously minor New Democratic Party (NDP). As the NDP's logo is orange, this was called the "orange wave". In 2012, the Liberal Party, led by Jean Charest, announced an increase in student tuition fees. This spawned months-long protests involving over 300,000 students, known as the Maple Spring, ultimately leading to a rollback of the increases. Partly as a result, the Liberal Party fell out of favour, allowing the Parti Québécois to regain power in 2012 and its leader, Pauline Marois, to become the first female premier of Quebec. The Liberal Party of Quebec then returned to power in 2014. Then, in 2018, the Coalition Avenir Québec (CAQ) won the provincial general election. Between 2020 and 2021, Quebec took measures against the COVID-19 pandemic. In 2022, the CAQ, led by Premier François Legault, increased its parliamentary majority in the provincial general election. In 2025, following the implementation of tariffs and aggressive rhetoric by United States president Donald Trump, Quebecers decreased their travel to the US, banned the sale of American alcohol, and slightly reduced personal purchases of US items. Geography Located in the eastern part of Canada, Quebec occupies a territory nearly three times the size of France.
It covers an area of 1.5 million square kilometres (0.58 million square miles) and its borders are more than 12,000 km (7,500 mi) long. Most of Quebec is very sparsely populated. The most populous physiographic region is the Great Lakes–St. Lawrence Lowlands. The combination of rich soils and the lowlands' relatively warm climate makes this valley the most prolific agricultural area of Quebec. The rural part of the landscape is divided into narrow rectangular tracts of land that extend from the river and date back to the seigneurial system. Quebec's topography differs greatly from one region to another, owing to the varying composition of the ground, the climate, and the proximity to water. More than 95% of Quebec's territory, including the Labrador Peninsula, lies within the Canadian Shield. This is generally a fairly flat and exposed terrain interspersed with higher points such as the Laurentian Mountains in southern Quebec, the Otish Mountains in central Quebec and the Torngat Mountains near Ungava Bay. While low- and medium-altitude peaks extend from western Quebec to the far north, high-altitude mountains emerge from the Capitale-Nationale region to the extreme east. Quebec's highest point, at 1,652 metres (5,420 ft), is Mont d'Iberville, known in English as Mount Caubvick. In the Labrador Peninsula portion of the Shield, the far northern region of Nunavik includes the Ungava Peninsula and consists of flat Arctic tundra inhabited mostly by the Inuit. Further south lie the Eastern Canadian Shield taiga ecoregion and the Central Canadian Shield forests. The Appalachian region has a narrow strip of ancient mountains along the southeastern border of Quebec. Quebec has one of the world's largest reserves of fresh water, which occupies 12% of its surface and represents 3% of the world's renewable fresh water. More than half a million lakes and 4,500 rivers empty into the Atlantic Ocean through the Gulf of Saint Lawrence, and into the Arctic Ocean by way of James, Hudson, and Ungava bays. The largest inland body of water is the Caniapiscau Reservoir; Lake Mistassini is the largest natural lake. The Saint Lawrence River has some of the world's largest inland Atlantic ports. Since 1959, the Saint Lawrence Seaway has provided a navigable link between the Atlantic Ocean and the Great Lakes. The public lands of Quebec cover approximately 92% of its territory, including almost all of the bodies of water. Protected areas fall into about twenty different legal designations (e.g., exceptional forest ecosystem, protected marine environment, national park, biodiversity reserve, wildlife reserve, zone d'exploitation contrôlée (ZEC)). More than 2,500 sites in Quebec today are protected areas. As of 2013, protected areas comprised 9.14% of Quebec's territory. The ecological classification of Quebec's territory, established by the Ministry of Forests, Wildlife and Parks in 2021, is presented in nine levels; it covers the diversity of terrestrial ecosystems throughout Quebec while taking into account both the characteristics of the vegetation (physiognomy, structure and composition) and those of the physical environment (relief, geology, geomorphology, hydrography). In general, the climate of Quebec is cold and humid, with variations determined by latitude and by maritime and elevation influences.
Because of the influence of both storm systems from the core of North America and from the Atlantic Ocean, precipitation is abundant throughout the year, with most areas receiving more than 1,000 mm (39 in) of precipitation, including over 300 cm (120 in) of snow in many areas. During the summer, severe weather patterns (such as tornadoes and severe thunderstorms) occur occasionally. Quebec is divided into four climatic zones: arctic, subarctic, humid continental and eastern maritime. From south to north, average temperatures range in summer between 25 and 5 °C (77 and 41 °F) and, in winter, between −10 and −25 °C (14 and −13 °F). In periods of intense heat and cold, temperatures can reach 35 °C (95 °F) in the summer and −40 °C (−40 °F) in the winter. Most of central Quebec, ranging from 51 to 58 degrees north, has a subarctic climate (Köppen Dfc). Winters there are long, very cold, and snowy, and among the coldest in eastern Canada, while summers are warm but very short due to the higher latitude and the greater influence of Arctic air masses. Precipitation is also somewhat less than farther south, except at some of the higher elevations. The northern regions of Quebec have an arctic climate (Köppen ET), with very cold winters and short, much cooler summers. The primary influences in this region are the Arctic Ocean currents (such as the Labrador Current) and continental air masses from the High Arctic. The all-time record high temperature was 40.0 °C (104.0 °F) and the all-time record low was −51.0 °C (−59.8 °F). The record for the greatest winter precipitation was set in winter 2007–2008, with more than five metres of snow in the area of Quebec City. March 1971, however, saw the "Century's Snowstorm", which dropped more than 40 cm (16 in) of snow on Montreal and up to 80 cm (31 in) on Mont Apica within 24 hours, affecting many regions of southern Quebec. The winter of 2010 was the warmest and driest recorded in more than 60 years. Given the geology of the province and its different climates, there are a number of large areas of vegetation in Quebec. These areas, listed from northernmost to southernmost, are: the tundra, the taiga, the Canadian boreal forest (coniferous), mixed forest and deciduous forest. On the edge of Ungava Bay and Hudson Strait is the tundra, whose flora is limited to lichens and which has fewer than 50 growing days per year. Further south, the climate is conducive to the growth of the Canadian boreal forest, bounded on the north by the taiga. Not as arid as the tundra, the taiga is associated with the subarctic regions of the Canadian Shield and is characterized by a greater number of both plant (600) and animal (206) species. The taiga covers about 20% of the total area of Quebec. The Canadian boreal forest is the northernmost and most abundant of the three forest areas in Quebec that straddle the Canadian Shield and the upper lowlands of the province. Given a warmer climate, the diversity of organisms is also higher: there are about 850 plant species and 280 vertebrate species. The mixed forest is a transition zone between the Canadian boreal forest and the deciduous forest. This area contains a diversity of plant (1,000) and vertebrate (350) species, despite relatively cool temperatures. The mixed forest ecozone is characteristic of the Laurentians, the Appalachians and the eastern lowland forests. The third and southernmost forest area is characterized by deciduous forests.
Because of its climate, this area has the greatest diversity of species, including more than 1,600 vascular plants and 440 vertebrates. The total forest area of Quebec is estimated at 750,300 km2 (289,700 sq mi). From Abitibi-Témiscamingue to the North Shore, the forest is composed primarily of conifers such as the balsam fir (Abies balsamea), the jack pine, the white spruce, the black spruce and the tamarack. The deciduous forest of the Great Lakes–St. Lawrence Lowlands is mostly composed of deciduous species such as the sugar maple, the red maple, the white ash, the American beech, the butternut (white walnut), the American elm, the basswood, the bitternut hickory and the northern red oak, as well as some conifers such as the eastern white pine and the northern whitecedar. The distribution areas of the paper birch, the trembling aspen and the mountain ash cover more than half of Quebec's territory. The biodiversity of the estuary and gulf of the Saint Lawrence River includes aquatic mammals such as the blue whale, the beluga, the minke whale and the harp seal (earless seal). Nordic marine animals include the walrus and the narwhal. Inland waters are populated by small to large freshwater fish, such as the largemouth bass, the American pickerel, the walleye, the Atlantic sturgeon (Acipenser oxyrinchus), the muskellunge, the Atlantic cod, the Arctic char, the brook trout, the tomcod (Microgadus tomcod), the Atlantic salmon, and the rainbow trout. Among the birds commonly seen in the southern part of Quebec are the American robin, the house sparrow, the red-winged blackbird, the mallard, the common grackle, the blue jay, the American crow, the black-capped chickadee, some warblers and swallows, the starling and the rock pigeon. Avian fauna includes birds of prey like the golden eagle, the peregrine falcon, the snowy owl and the bald eagle. Sea and semi-aquatic birds seen in Quebec are mostly the Canada goose, the double-crested cormorant, the northern gannet, the European herring gull, the great blue heron, the sandhill crane, the Atlantic puffin and the common loon. The large land wildlife includes the white-tailed deer, the moose, the muskox, the caribou (reindeer), the American black bear and the polar bear. The medium-sized land wildlife includes the cougar, the coyote, the eastern wolf, the bobcat, the Arctic fox and the red fox. The small animals seen most commonly include the eastern grey squirrel, the snowshoe hare, the groundhog, the skunk, the raccoon, the chipmunk and the Canadian beaver. Government and politics Quebec is founded on the Westminster system, and is both a liberal democracy and a constitutional monarchy with a parliamentary regime. The head of government in Quebec is the premier (called premier ministre in French), who leads the largest party in the unicameral National Assembly (Assemblée nationale), from which the Executive Council of Quebec is appointed. The Conseil du trésor supports the ministers of the Executive Council in their stewardship of the state. The lieutenant governor represents the King of Canada. Quebec has 78 members of Parliament (MPs) in the House of Commons of Canada, who are elected in federal elections. In the Senate of Canada, Quebec is represented by 24 senators, who are appointed on the advice of the prime minister of Canada. The Quebec government holds administrative and police authority in its areas of exclusive jurisdiction.
The Parliament of the 43rd legislature is made up of the following parties: Coalition Avenir Québec (CAQ), Parti libéral du Québec (PLQ), Québec solidaire (QS) and Parti Québécois (PQ), as well as an independent member. There are 25 official political parties in Quebec. Quebec has a network of three offices for representing itself and defending its interests within Canada: one in Moncton for the provinces to the east, one in Toronto for the provinces to the west, and one in Ottawa for the federal government. These offices' mandate is to ensure an institutional presence of the Government of Quebec alongside the other Canadian governments. Quebec's territory is divided into 17 administrative regions, which are themselves subdivided into various divisions for regional and municipal purposes. Quebec's constitution is enshrined in a series of social and cultural traditions defined in a set of judicial decisions and legislative documents, including the Loi sur l'Assemblée nationale ("Law on the National Assembly"), the Loi sur l'exécutif ("Law on the Executive"), and the Loi électorale du Québec ("Electoral Law of Quebec"). Other notable examples include the Charter of Human Rights and Freedoms, the Charter of the French Language, and the Civil Code of Quebec. Quebec's international policy is founded upon the Gérin-Lajoie doctrine [fr], formulated in 1965. While Quebec's Ministry of International Relations coordinates international policy, Quebec's general delegations are the main interlocutors in foreign countries. Quebec is the only Canadian province that has set up a ministry devoted exclusively to international relations. In 2006, Quebec adopted a green plan to meet the objectives of the Kyoto Protocol on climate change. The Ministry of Sustainable Development, Environment, and Fight Against Climate Change (MELCC) is the primary entity responsible for applying environmental policy. The Société des établissements de plein air du Québec (SEPAQ) is the main body responsible for managing national parks and wildlife reserves. Nearly 500,000 people took part in a climate protest on the streets of Montreal in 2019. Agriculture in Quebec has been subject to agricultural zoning regulations since 1978. To counter expanding urban sprawl, agricultural zones were created to protect fertile land, which makes up only 2% of Quebec's total area. Quebec's forests [fr] are essentially public property. The calculation of annual allowable cuts is the responsibility of the Bureau du forestier en chef. The Union des producteurs agricoles (UPA) seeks to protect the interests of its members, including forestry workers, and works jointly with the Ministry of Agriculture, Fisheries and Food (MAPAQ) and the Ministry of Energy and Natural Resources. The Ministère de l'Emploi et de la Solidarité sociale du Québec has the mandate to oversee social and workforce development through Emploi-Québec and its local employment centres (CLE). This ministry is also responsible for managing the Régime québécois d'assurance parentale (QPIP) as well as last-resort financial support for people in need. The Commission des normes, de l'équité, de la santé et de la sécurité du travail [fr] (CNESST) is the main body responsible for labour law in Quebec and for enforcing agreements concluded between unions and employers. Revenu Québec is the body responsible for collecting taxes.
The province draws its revenue from a progressive income tax, a 9.975% sales tax, various other provincial taxes (e.g., carbon, corporate and capital gains taxes), equalization and transfer payments from the federal government, and direct payments. By some measures Quebec residents are the most taxed in Canada; a 2012 study indicated that "Quebec companies pay 26 per cent more in taxes than the Canadian average". Quebec's immigration philosophy is based on the principles of pluralism and interculturalism. The Ministère de l'Immigration et des Communautés culturelles du Québec is responsible for the selection and integration of immigrants. Programs favour immigrants who know French, have in-demand skills and present a low risk of criminality. Quebec's health and social services network is administered by the Ministry of Health and Social Services. It is composed of 95 réseaux locaux de services (RLS; 'local service networks') and 18 agences de la santé et des services sociaux (ASSS; 'health and social services agencies'). Quebec's health system is supported by the Régie de l'assurance maladie du Québec (RAMQ), which works to maintain the accessibility of services for all citizens of Quebec. The Ministère de la Famille et des Aînés du Québec operates the centres de la petite enfance [fr] (CPEs; 'centres for young children'). Quebec provides universal low-fee childcare for all children under 12. Quebec's education system is administered by the Ministry of Education and Higher Education (primary and secondary schools), the Ministère de l'Enseignement supérieur (CEGEPs) and the Conseil supérieur de l'éducation du Québec (universities and colleges). In 2012, the annual cost of postsecondary tuition was CA$2,168 (€1,700) – less than half of Canada's average tuition. Part of the reason for this is that tuition fees were frozen at a relatively low level when CEGEPs were created during the Quiet Revolution. When Jean Charest's government decided in 2012 to sharply increase university fees, student protests erupted. Because of these protests, Quebec's tuition fees remain relatively low. Quebec's closest international partner is the United States, with which it shares a long and positive history. Products of American culture like songs, movies, fashion and food strongly influence Québécois culture. Quebec has a storied relationship with France, as Quebec was a part of the French Empire and the two share a language. The Fédération France-Québec [fr] and the Francophonie are among the tools used for relations between Quebec and France. In Paris, a place du Québec was inaugurated in 1980. Quebec also has a storied relationship with the United Kingdom, having been a part of the British Empire. Quebec and the UK share the same head of state, King Charles III. Quebec has a network of 32 offices in 18 countries. These offices represent Quebec in foreign countries and are overseen by Quebec's Ministry of International Relations. Quebec, like other Canadian provinces, also maintains representatives in some Canadian embassies and consulates general. As of 2019, the Government of Quebec had delegates-general (agents-general) in Brussels, London, Mexico City, Munich, New York City, Paris and Tokyo; delegates in Atlanta, Boston, Chicago, Houston, Los Angeles, and Rome; and offices headed by directors offering more limited services in Barcelona, Beijing, Dakar, Hong Kong, Mumbai, São Paulo, Shanghai, Stockholm, and Washington.
In addition, there are the equivalent of honorary consuls, titled antennes, in Berlin, Philadelphia, Qingdao, Seoul, and Silicon Valley. Quebec also has a representative to UNESCO and participates in the Organization of American States. Quebec is a member of the Assemblée parlementaire de la Francophonie and of the Organisation internationale de la Francophonie. Law Quebec law is the shared responsibility of the federal and provincial governments. The federal government is responsible for criminal law, foreign affairs and laws relating to the regulation of Canadian commerce, interprovincial transportation, and telecommunications. The provincial government is responsible for private law, the administration of justice, and several social domains, such as social assistance, healthcare, education, and natural resources. Quebec law is influenced by two judicial traditions (civil law and common law) and four classic sources of law (legislation, case law, doctrine and customary law). Private law in Quebec affects all relationships between individuals (natural or juridical persons) and is largely under the jurisdiction of the Parliament of Quebec. The Parliament of Canada also influences Quebec private law, in particular through its power over banks, bankruptcy, marriage, divorce and maritime law. The Droit civil du Québec [fr] is the primary component of Quebec's private law and is codified in the Civil Code of Quebec. Public law in Quebec is largely derived from the common law tradition. Quebec constitutional law governs the rules surrounding the Quebec government, the Parliament of Quebec and Quebec's courts. Quebec administrative law governs relations between individuals and the Quebec public administration. Quebec also has some limited jurisdiction over criminal law. Finally, Quebec, like the federal government, has the power to levy taxes. Certain portions of Quebec law are considered mixed. This is the case, for example, with human rights and freedoms, which are governed by the Quebec Charter of Human Rights and Freedoms, a charter that applies to both government and citizens. English is not an official language in Quebec law. However, both English and French are required by the Constitution Act, 1867 for the enactment of laws and regulations, and any person may use English or French in the National Assembly and the courts. The books and records of the National Assembly must also be kept in both languages. Although Quebec is a civil law jurisdiction, it does not follow the pattern of other civil law systems, which have court systems divided by subject matter. Instead, the court system follows the English model of unitary courts of general jurisdiction. The provincial courts have jurisdiction to decide matters under provincial law as well as federal law, including civil, criminal and constitutional matters. The major exception to the principle of general jurisdiction is that the Federal Court and the Federal Court of Appeal have exclusive jurisdiction over some areas of federal law, such as review of federal administrative bodies, federal taxes, and matters relating to national security. The Quebec courts are organized in a pyramid. At the bottom are the municipal courts, the Professions Tribunal, the Human Rights Tribunal, and the administrative tribunals. Decisions of those bodies can be reviewed by the two trial courts, the Court of Quebec and the Superior Court of Quebec. The Court of Quebec is the main criminal trial court, and also a court for small civil claims.
The Superior Court is a trial court of general jurisdiction in both criminal and civil matters. The decisions of those courts can be appealed to the Quebec Court of Appeal. Finally, if a case is of great importance, it may be appealed to the Supreme Court of Canada. The Court of Appeal serves two purposes. First, it is the general court of appeal for all legal issues from the lower courts. It hears appeals from the trial decisions of the Superior Court and the Court of Quebec. It can also hear appeals from decisions rendered by those two courts on appeals or judicial review matters relating to the municipal courts and administrative tribunals. Second, but much more rarely, the Court of Appeal has the power to respond to reference questions posed to it by the Quebec Cabinet. The Court of Appeal renders more than 1,500 judgments per year. The Sûreté du Québec is the main police force of Quebec. It can also serve a support and coordination role with other police forces, such as municipal police forces or the Royal Canadian Mounted Police (RCMP). The RCMP has the power to enforce certain federal laws in Quebec; however, given the existence of the Sûreté du Québec, its role is more limited than in the other provinces. Municipal police, such as the Service de police de la Ville de Montréal and the Service de police de la Ville de Québec, are responsible for law enforcement in their municipalities. The Sûreté du Québec fulfils the role of municipal police in the 1,038 municipalities that do not have a municipal police force. The Indigenous communities of Quebec have their own police forces. For offences against provincial or federal laws in Quebec (including the Criminal Code), the Director of Criminal and Penal Prosecutions is responsible for prosecuting offenders in court through Crown attorneys. The Department of Justice of Canada also has the power to prosecute offenders, but only for offences against specific federal laws (e.g., selling narcotics). Quebec is responsible for operating the prison system for sentences of less than two years, while the federal government operates penitentiaries for sentences of two years or more. Demographics In the 2021 census, Quebec's population was 8,501,833, a 4.1% increase from its 2016 population of 8,164,361. With a land area of 1,356,625.27 km2 (523,795.95 sq mi), it had a population density of 6.0/km2 (15.6/sq mi) in 2016. Quebec accounted for a little under 23% of the Canadian population. The largest cities in Quebec are Montreal (1,762,976), Quebec City (538,738), Laval (431,208), and Gatineau (281,501). In 2016, Quebec's median age was 41.2 years. As of 2020, 20.8% of the population was younger than 20, 59.5% was aged between 20 and 64, and 19.7% was 65 or older. In 2019, Quebec recorded an increase in the number of births compared to the year before (84,200 vs 83,840) and had a total fertility rate of about 1.6 children per woman. As of 2020, the average life expectancy was 82.3 years. In 2019, Quebec registered its highest rate of population growth since 1972, with an increase of 110,000 people, mostly because of the arrival of a high number of immigrants. As of 2019, most international immigrants came from China, India and France. In 2016, 30% of the population possessed a postsecondary degree or diploma. Most residents, particularly couples, are property owners. In 2016, 80% of both property owners and renters considered their housing to be "unaffordable".
In the 2021 Canadian census, 29.3% of Quebec's population stated their ancestry was of Canadian origin and 21.1% stated their ancestry was of French origin. As of 2021, 18% of Quebec's population belonged to a visible minority group. According to the 2021 census, the most commonly cited religion in Quebec remained Roman Catholicism. The Roman Catholic Church has occupied a central and integral place in Quebec society since the foundation of Quebec City in 1608. However, since the Quiet Revolution, which secularized Quebec, irreligion has been growing significantly. Religions other than Christianity, Judaism and Indigenous faiths were not present in Quebec before the 20th century. They started establishing a small presence following the passing of the Immigration Act of 1962. Islam in particular has grown rapidly since the 1990s due to high immigration levels; its number of adherents increased from 44,930 (0.6% of the population) in 1991 to 421,715 (5.1%) in 2021. The oldest parish church in North America is the Cathedral-Basilica of Notre-Dame de Québec. Its construction began in 1647, when it was known under the name Notre-Dame-de-la-Paix, and it was finished in 1664. The most frequented place of worship in Quebec is the Basilica of Sainte-Anne-de-Beaupré, which welcomes millions of visitors each year. Saint Joseph's Oratory is the largest place of worship in the world dedicated to Saint Joseph. Many pilgrimages include places such as Saint Benedict Abbey, the Sanctuaire Notre-Dame-du-Cap [fr], Notre-Dame de Montréal Basilica, Marie-Reine-du-Monde de Montréal Basilica-Cathedral, Saint-Michel Basilica-Cathedral, and Saint Patrick's Basilica. Another important place of worship in Quebec is the Anglican Holy Trinity Cathedral, erected between 1800 and 1804; it was the first Anglican cathedral built outside the British Isles. Quebec differs from other Canadian provinces in that French is its sole official language, while English predominates in the rest of Canada. French is the common language, understood and spoken by 93.7% of the population according to the 2021 census, and is the sole native language of 74.8% of the population (or slightly more than 6.5 million residents) and a native language (alone or in combination with others) of 77.8%. This makes Quebec the only Canadian province whose population is mainly Francophone. Quebec French is the umbrella term for the local variants of the language. Canada is estimated to be home to roughly 30 regional French accents, 17 of which can be found in Quebec. 42.2% of Quebec's population with a French mother tongue can converse in English, the predominant language of the rest of Canada. The Office québécois de la langue française oversees the application of linguistic policies respecting French on the territory, jointly with the Superior Council of the French Language and the Commission de toponymie du Québec. The foundation for these linguistic policies was laid in 1968 by the Gendron Commission, and since 1977 they have been accompanied by the Charter of the French Language ("Bill 101"). The policies are in effect to protect Quebec from assimilation by its English-speaking neighbours (the rest of Canada and the United States) and were also created to rectify the historical injustice between the Francophone majority and the Anglophone minority, the latter of which had been favoured while Quebec was a colony of the British Empire. Quebec remains, alongside Haiti, one of the only two major Francophone-dominant regions in the Americas.
Anglo-Quebecers, a name for residents whose main language is English, constitute the second largest linguistic group in Quebec. In 2021, English was the sole mother tongue of 7.6% of Quebec residents, and was a native language (alone or in combination with others) of 10.0%. Anglo-Quebecers reside mainly in the west of the island of Montreal (the West Island), downtown Montreal and the Pontiac. Three families of Indigenous languages, encompassing eleven languages, exist in Quebec: the Algonquian language family (Abenaki, Algonquin, Maliseet-Passamaquoddy, Mi'kmaq, and the linguistic continuum of Atikamekw, Cree, Innu-aimun, and Naskapi), the Inuit–Aleut language family (Nunavimmiutitut, an Inuktitut dialect spoken by the Inuit of Nord-du-Québec), and the Iroquoian language family (Mohawk and Wendat). In the 2016 census, 50,895 people said they knew at least one Indigenous language and 45,570 people declared having an Indigenous language as their mother tongue. In Quebec, most Indigenous languages are transmitted quite well from one generation to the next, with a mother-tongue retention rate of 92%. As of the 2016 census, the most common immigrant languages claimed as a native language were Arabic (2.5% of the total population), Spanish (1.9%), Italian (1.4%), Creole languages (mainly Haitian Creole) (0.8%), and Mandarin (0.6%). As of the 2021 Canadian census, the ten most spoken languages in the province were French (spoken by 7,786,735 people, or 93.72% of the population), English (4,317,180 or 51.96%), Spanish (453,905 or 5.46%), Arabic (343,675 or 4.14%), Italian (168,040 or 2.02%), Haitian Creole (118,010 or 1.42%), Mandarin (80,520 or 0.97%), Portuguese (65,605 or 0.8%), Russian (55,485 or 0.7%), and Greek (50,375 or 0.6%); the census question on knowledge of languages allows for multiple responses. In 2021, the Indigenous population of Quebec numbered 205,010 (2.5% of the population), including 15,800 Inuit, 116,550 First Nations people, and 61,010 Métis. These figures are an undercount, as some Indian bands regularly refuse to participate in Canadian censuses: in 2016, the Mohawk reserves of Kahnawake and Doncaster 17, along with the Indian settlement of Kanesatake and the Lac-Rapide reserve of the Algonquins of Barriere Lake, were not counted. The Inuit of Quebec live mainly in Nunavik, in Nord-du-Québec, and make up the majority of the population living north of the 55th parallel. There are ten First Nations ethnic groups in Quebec: the Abenaki, the Algonquin, the Atikamekw, the Cree, the Wolastoqiyik, the Mi'kmaq, the Innu, the Naskapi, the Wendat and the Mohawk. The Mohawks were once part of the Iroquois Confederacy. Aboriginal rights were enunciated in the Indian Act, adopted at the end of the 19th century, which confines First Nations within the reserves created for them; the Indian Act is still in effect today. In 1975, the Cree, the Inuit and the Quebec government signed the James Bay and Northern Quebec Agreement, which extended Indigenous rights beyond the reserves to over two-thirds of Quebec's territory. Because this extension was enacted without the participation of the federal government, the extended Indigenous rights exist only in Quebec. In 1978, the Naskapi joined the agreement when the Northeastern Quebec Agreement was signed. Discussions have been underway with the Innu of the Côte-Nord and Saguenay–Lac-Saint-Jean for the potential creation of a similar autonomy in two new distinct territories that would be called Innu Assi and Nitassinan.
A few political institutions have also been created over time. The subject of Acadians in Quebec is an important one: more than a million people in Quebec are of Acadian descent, and roughly 4.8 million have one or more Acadian ancestors in their family tree, because a large number of Acadians fled Acadia to take refuge in Quebec during the Great Upheaval. Furthermore, more than a million people bear a surname of Acadian origin. Quebec is home to many Acadian communities: Acadians live mainly on the Magdalen Islands and the Gaspé Peninsula, but about thirty other communities are present elsewhere in Quebec, mostly in the Côte-Nord and Centre-du-Québec regions. An Acadian community in Quebec can be called a "Cadie", "Petite Cadie" or "Cadien". Economy Quebec has an advanced, market-based, and open economy. In 2022, its gross domestic product (GDP) was US$50,000 per person at purchasing power parity. The economy of Quebec is the 46th largest in the world, just behind Chile's, and 29th for GDP per person. Quebec represents 19% of the GDP of Canada. The provincial debt-to-GDP ratio peaked at 51% in 2012–2013 and declined to 43% in 2021. Like those of most industrialized countries, Quebec's economy is based mainly on the services sector. The economy has traditionally been fuelled by abundant natural resources and a well-developed infrastructure, but has undergone significant change over the past decade. Firmly grounded in the knowledge economy, Quebec has one of the highest rates of GDP growth in Canada. The knowledge sector represents about 31% of Quebec's GDP. In 2011, Quebec experienced faster growth of its research-and-development (R&D) spending than the other Canadian provinces. Quebec's R&D spending in 2011 was equal to 2.63% of GDP, above the European Union average of 1.8%. The percentage spent on research and technology is the highest in Canada and higher than the averages for the Organisation for Economic Co-operation and Development and the G7 countries. Some of the most important companies from Quebec are: Bombardier, Desjardins, the National Bank of Canada, the Jean Coutu Group, Transcontinental média, Quebecor, the Métro Inc. food retailers, Hydro-Québec, the Société des alcools du Québec, the Bank of Montreal, Saputo, the Cirque du Soleil, the Caisse de dépôt et placement du Québec, the Normandin restaurants, and Vidéotron. Thanks to the World Trade Organization (WTO) and the North American Free Trade Agreement (NAFTA), Quebec had, as of 2009, experienced an increase in its exports and in its ability to compete on the international market. International exchanges contribute to the strength of the Quebec economy. NAFTA is especially advantageous as it gives Quebec, among other things, access to a market of 130 million consumers within a radius of 1,000 kilometres. In 2008, Quebec's exports to other provinces in Canada and abroad totalled CA$157.3 billion, or 51.8% of Quebec's gross domestic product (GDP). Of this total, 60.4% were international exports, and 39.6% were interprovincial exports. The breakdown of international merchandise exports by destination is: United States (72.2%), Europe (14.4%), Asia (5.1%), Middle East (2.7%), Central America (2.3%), South America (1.9%), Africa (0.8%) and Oceania (0.7%). In 2008, Quebec imported $178 billion worth of goods and services, or 58.6% of its GDP. Of this total, 62.9% of goods were imported from international markets, while 37.1% were interprovincial imports.
The breakdown of international merchandise imports by origin is as follows: United States (31.1%), Europe (28.7%), Asia (17.1%), Africa (11.7%), South America (4.5%), Central America (3.7%), Middle East (1.3%) and Oceania (0.7%). Quebec produces most of Canada's hydroelectricity and is the second biggest hydroelectricity producer in the world (2019). Because of this, Quebec has been described as a potential clean energy superpower. In 2019, Quebec's electricity production amounted to 214 terawatt-hours (TWh), 95% of which came from hydroelectric power stations and 4.7% from wind energy. The public company Hydro-Québec occupies a dominant position in the production, transmission and distribution of electricity in Quebec. Hydro-Québec operates 63 hydroelectric power stations and 28 large reservoirs; because of the remoteness of its generating stations, its TransÉnergie division operates the largest electricity transmission network in North America. Quebec stands out for its use of renewable energy. In 2008, electricity ranked as the main form of energy used in Quebec (41.6%), followed by oil (38.2%) and natural gas (10.7%). In 2017, 47% of all energy came from renewable sources. The Quebec government's energy policy seeks to build a low-carbon economy by 2030. Quebec ranks among the top ten areas in the world in which to do business in mining. In 2011, the mining industry accounted for 6.3% of Quebec's GDP and employed about 50,000 people in 158 companies. The province has around 30 mines, 158 exploration companies and 15 primary processing industries. While many metallic and industrial minerals are exploited, the main ones are gold, iron, copper and zinc; others include titanium, asbestos, silver, magnesium and nickel. Quebec is also a major source of diamonds. Since 2002, Quebec has seen an increase in mineral exploration; in 2003, the value of mineral exploitation reached $3.7 billion. The agri-food industry plays an important role in the economy of Quebec, with meat and dairy products being the two main sectors. It accounts for 8% of Quebec's GDP and generates $19.2 billion annually. In 2010, this industry generated 487,000 jobs in agriculture, fisheries, the manufacturing of food, beverages and tobacco, and food distribution. In 2021, Quebec's aerospace industry employed 35,000 people and its sales totalled C$15.2 billion – the world's 6th largest. Many aerospace companies are active there, including CMC Electronics, Bombardier, Pratt & Whitney Canada, Héroux-Devtek, Rolls-Royce, General Electric, Bell Textron, L3Harris, Safran, SONACA, CAE Inc., and Airbus. Montreal is globally considered one of the aerospace industry's great centres, and several international aviation organisations have their headquarters there. Both Aéro Montréal and the CRIAQ were created to assist aerospace companies. The pulp and paper industry accounted for 3.1% of Quebec's GDP in 2007 and generated annual shipments valued at more than $14 billion. This industry employs 68,000 people in several regions of Quebec and is the main, and in some cases only, source of manufacturing activity in more than 250 municipalities in the province. The forest industry has slowed in recent years because of the softwood lumber dispute. In 2020, this industry represented 8% of Quebec's exports. As Quebec has few significant deposits of fossil fuels, all hydrocarbons are imported. Refiners' sourcing strategies have varied over time and have depended on market conditions.
In the 1990s, Quebec purchased much of its oil from the North Sea. Since 2015, it has consumed almost exclusively crude produced in western Canada and the United States. Quebec's two active refineries have a total capacity of 402,000 barrels per day, greater than local needs, which stood at 365,000 barrels per day in 2018. Thanks to hydroelectricity, Quebec is the world's fourth largest aluminum producer and produces 90% of Canada's aluminum. Three companies make aluminum there: Rio Tinto, Alcoa and Aluminium Alouette. Their nine smelters produce 2.9 million tonnes of aluminum annually and employ 30,000 workers. The finance and insurance sector employs more than 168,000 people: 78,000 in the banking sector, 53,000 in the insurance sector and 20,000 in the securities and investment sector. The Bank of Montreal, founded in 1817 in Montreal, was Quebec's first bank but, like many other large banks, its head office is now in Toronto. Several banks remain based in Quebec, including the National Bank of Canada, the Desjardins Group and the Laurentian Bank. The tourism industry is a major sector in Quebec. The Ministry of Tourism ensures the development of this industry under the commercial name "Bonjour Québec". Quebec is the second most important province for tourism in Canada, receiving 21.5% of tourists' spending (2021). The industry provides employment to over 400,000 people, who work in the more than 29,000 tourism-related businesses in Quebec, most of which are restaurants or hotels. 70% of tourism-related businesses are located in or close to Montreal or Quebec City. It is estimated that, in 2010, Quebec welcomed 25.8 million tourists. Of these, 76.1% came from Quebec, 12.2% from the rest of Canada, 7.7% from the United States and 4.1% from other countries. Annually, tourists spend more than $6.7 billion in Quebec's tourism industry. Approximately 1.1 million Quebecers work in the field of science and technology. In 2007, the Government of Quebec launched the Stratégie québécoise de la recherche et de l'innovation (SQRI), aiming to promote development through research, science and technology. The government hoped to create a strong culture of innovation in Quebec for the coming decades and to build a sustainable economy. Quebec's IT sector has 7,600 businesses and employs 140,000 people. Its most developed segments are telecommunications, multimedia and video game software, computer services, microelectronics, and components. There are currently 115 telecommunications companies established in the province, including Motorola, Ericsson and Mitec. The multimedia and video game sector has been growing fast since the early 2000s. The Digital Alliance, which claims 191 active members in video games, online education, mobility and Internet services, estimated the annual revenue of the sector at $827 million in 2014. The microelectronics sector is made up of more than 100 companies employing 13,000 people. Computer services, software development, and consulting engineering employ 60,000 skilled workers. While the largest IT employers are CMC Electronics, IBM, and Matrox, many other tech companies are present, including Ubisoft, Electronic Arts, Microids, Strategy First, Eidos, Activision, A2M and Frima Studio. Montreal is ranked fourth in North America for the number of jobs in the pharmaceutical sector.
Education The education system of Quebec, administered by the government of Quebec's Ministry of Education and Higher Education, differs from those of other Canadian provinces. The province has five levels of education: preschool, primary school, secondary school [fr], CEGEP (see College education in Quebec), and finally university or college. Attached to these levels are options for professional development, adult classes, and continuing education. For every level of teaching there exist a public network and a private network: the public network is financed by taxes, while the private options must be paid for by the student. In 2020, school boards were replaced by school service centres. All universities in Quebec exist by virtue of laws adopted by the National Assembly of Quebec in 1967, during the Quiet Revolution. Their financing mostly comes from public taxes, but the laws under which they operate grant them more autonomy than other levels of education. Quebec is considered one of the world leaders in fundamental scientific research, having produced ten Nobel laureates in physics, chemistry, or medicine. It is also considered one of the world leaders in sectors such as aerospace, information technology, biotechnology and pharmaceuticals, and therefore plays a significant role in the world's scientific and technological communities. Between 2000 and 2011, Quebec produced 9,469 scientific publications in biomedical research and engineering. Quebec's contribution to science and technology represented approximately 1% of worldwide research between the 1980s and 2009. The province is one of the world leaders in space science and has contributed to important discoveries in the field. The Canadian Space Agency was established in Quebec because of the province's major role in this research field. A total of four Quebecers have been in space since the creation of the CSA: Marc Garneau, Julie Payette, and David Saint-Jacques as CSA astronauts, plus Guy Laliberté as a private citizen who paid for his trip. Quebec has also contributed to the creation of Canadian artificial satellites including SCISAT-1, ISIS, Radarsat-1 and Radarsat-2. Quebec ranks among the world leaders in the life sciences: the province has more than 450 biotechnology and pharmaceutical companies, which together employ more than 25,000 people and 10,000 highly qualified researchers. Infrastructure The development and security of land transportation in Quebec are provided by Transports Québec. Other organizations, such as the Canadian Coast Guard and Nav Canada, provide the same service for sea and air transportation. The Commission des transports du Québec works with freight carriers and public transport operators. The réseau routier québécois (Quebec road network) is managed by the Société de l'assurance automobile du Québec (SAAQ; Quebec Automobile Insurance Corporation) and consists of about 185,000 km (115,000 mi) of highways and national, regional, local, collector and forest roads. In addition, Quebec has almost 12,000 bridges, tunnels, retaining walls, culverts and other structures, such as the Quebec Bridge, the Laviolette Bridge and the Louis-Hippolyte Lafontaine Bridge–Tunnel. In the waters of the Saint Lawrence there are eight deep-water ports for the transhipment of goods. In 2003, 3,886 cargo ships and 9.7 million tonnes of goods transited the Quebec portion of the Saint Lawrence Seaway.
Concerning rail transport, Quebec has 6,678 km (4,150 mi) of railways integrated into the large North American network. Although primarily intended for the transport of goods, through companies such as Canadian National (CN) and Canadian Pacific (CP), the Quebec railway network is also used by inter-city passengers via Via Rail Canada and Amtrak. In April 2012, plans were unveiled for the construction of an 800 km (497 mi) railway running north from Sept-Îles to support mining and other resource extraction in the Labrador Trough. Quebec's air network includes 43 airports that offer scheduled services on a daily basis. In addition, the Government of Quebec owns airports and heliports to increase the accessibility of local services to communities in the Basse-Côte-Nord and northern regions. Various other transport networks crisscross the province of Quebec, including hiking trails, snowmobile trails and bike paths. The Route verte (Green Way) bicycle network is the largest, at nearly 4,000 km (2,500 mi) in length. Quebec has a health policy that emphasizes prevention, is based on the analysis of health-related data, and evolves with the needs of the population. As in other developed economies, the public health policies implemented in Quebec have extended the life expectancy of its population since the mid-20th century. Health and social services are part of the same administration. The Quebec health system is also public, which means that the government acts as the main insurer and administrator, that funding is provided by general taxation, and that patients have access to care regardless of their income level. There are 34 health establishments in Quebec, 22 of which are Integrated Health and Social Services Centres [fr] (CISSS). They ensure the distribution of different services in the territories assigned to them. Quebec has approximately 140 hospitals for general or specialised care (CHSGS). Quebec also has other types of establishments in its healthcare system, such as the Centre local de services communautaires (CLSC), the Centre d'hébergement et de soins de longue durée (CHSLD), the Centre de réadaptation and the Centre de protection de l'enfance et de la jeunesse. Finally, there are private healthcare establishments (paid for directly by the patient) like the Groupe de médecine de famille [fr], pharmacies, private clinics, dentists, community organisations and retirement homes. A 2021 Ipsos poll found that 85% of Quebecers agreed that their health care system is too bureaucratic to respond to the needs of the population; a 2023 poll found that fewer than half of Quebecers were satisfied with the provincial health care system. In 2021, 59.9% of Quebec's residents were property owners. In 2019, among property owners, 34% were couples with children, 33% were couples without children, 22% lived alone, 8% were single parents, and 3% were in other situations. Among renters, 16% were couples with children, 13% were couples without children, 51% lived alone, 13% were single parents, and 7% were in other situations. Since the 1980s, the average price of a single-family home has risen nearly ninefold, from $48,715 in 1980 to $424,844 in 2021, a doubling roughly every 13 years. Because average salaries did not keep pace with these increases, housing is far less affordable than it was 40 years ago. In 2022, the cities with the most severe housing shortages were Granby, with a vacancy rate of 0.1%, followed by Marieville (0.1%), Rimouski (0.2%), Drummondville (0.2%) and Rouyn-Noranda (0.3%).
Culture Quebec has developed its own unique culture from its historic New France roots. Its culture also reflects a distinct perspective: that of a French-speaking society surrounded by a larger English-speaking culture. The Quartier Latin (English: Latin Quarter) of Montreal and Vieux-Québec (English: Old Quebec) in Quebec City are two hubs of metropolitan cultural activity. Life in the cafés and "terrasses" (outdoor restaurant terraces) reveals a Latin influence in Quebec's culture, with the théâtre Saint-Denis in Montreal and the Capitole de Québec theatre in Quebec City being among the principal attractions. A number of governmental and non-governmental organizations support cultural activity in Quebec. The Conseil des arts et des lettres du Québec (CALQ) is an initiative of Quebec's Ministry of Culture and Communications. It supports creation, innovation, production, and international exhibits in all of Quebec's cultural fields. The Société de développement des entreprises culturelles (SODEC) works to promote and fund individuals working in the cultural industry. The Prix du Québec is an award given by the government to confer the highest distinction and honour on individuals demonstrating exceptional achievement in their respective cultural fields. Other awards include the Athanase David Awards (literature), the Félix Awards (music), the Gémeaux Awards (television and film), the Jutra Awards (cinema), the Masques Awards (theatre), the Olivier Guimond Awards (humour) and the Opus Awards (concert music). Traditional music accompanies many dances, such as the jig, the quadrille, the reel and line dancing. Traditional instruments include the harmonica, fiddle, spoons, jaw harp and accordion. The First Nations and the Inuit of Quebec also have their own traditional music. Quebec's most popular artists of the last century include the singers Félix Leclerc, Gilles Vigneault, Kate and Anna McGarrigle and Céline Dion. The Association québécoise de l'industrie du disque, du spectacle et de la vidéo (ADISQ) was created in 1978 to promote the music industry in Quebec. The Orchestre symphonique de Québec and the Montreal Symphony Orchestra are respectively associated with the Opéra de Québec and the Opéra de Montréal, whose performances are presented at the Grand Théâtre de Québec and at Place des Arts. The Ballets Jazz de Montréal, the Grands Ballets and La La La Human Steps are three important professional troupes of contemporary dance. Among the theatre troupes are the Compagnie Jean-Duceppe, the Théâtre La Rubrique, and the Théâtre Le Grenier. In addition to Quebec's network of cultural centres, the venues include the Monument-National and the Rideau Vert ("green curtain") theatre in Montreal, and the Trident theatre in Quebec City. The National Theatre School of Canada and the Conservatoire de musique et d'art dramatique du Québec train future performers. Several circus troupes have been created in recent decades, the most important being the Cirque du Soleil. Among these troupes are contemporary, travelling and on-horseback circuses, such as Les 7 Doigts de la Main, Cirque Éloize, Cavalia, Kosmogonia, Saka and Cirque Akya. The National Circus School and the École de cirque de Québec were created to train future contemporary circus artists. Tohu, la Cité des Arts du Cirque, was founded in 2004 to disseminate the circus arts. Comedy is a vast cultural sector.
Quebec has created and is home to several comedy festivals, including the Just for Laughs festival in Montreal and the Grand Rire festivals of Quebec City, Gatineau and Sherbrooke. The Association des professionnels de l'industrie de l'humour (APIH) is the main organization for the promotion and development of the cultural sector of humour in Quebec, and the National School of Humour [fr], created in 1988, trains future humorists in Quebec. The Cinémathèque québécoise has a mandate to promote the film and television heritage of Quebec. The National Film Board of Canada (NFB), a federal Crown corporation, carries out the same mission at the national level. The Association of Film and Television in Quebec (APFTQ) promotes independent production in film and television. While the Association of Producers and Directors of Quebec (APDQ) represents the business of filmmaking and television, the Association of Community Radio Broadcasters of Quebec (ARCQ, from its French name) represents the independent radio stations. Several movie theatres across Quebec ensure the dissemination of Quebec cinema. With its cinematic installations, such as the Cité du cinéma and Mel's studios, the city of Montreal is home to the filming of various productions. The state corporation Télé-Québec, the federal Crown corporation CBC, general and specialized private channels, networks, and independent and community radio stations broadcast the various Quebec téléromans, the national and regional news, and other programming. Les Rendez-vous du cinéma québécois is a festival built around the Jutra Awards ceremony, which rewards the work and personalities of Quebec cinema. The Artis and Gemini Awards galas recognize personalities of the television and radio industry in Quebec and French Canada. The Film Festival of the 3 Americas, the Festival of International Short Film, the World Film Festival and the Festival of New Cinema are other annual events surrounding the film industry in Quebec. In the realm of literature and international publishing, the Québec Édition group is a committee created by the National Association of Book Editors, dedicated to the international influence of French-language publishing from Quebec and Canada. Quebec's French-speaking populace has the second largest body of folktales in Canada (after that of the First Nations). When the early settlers arrived from France in the 17th century, they brought with them popular tales from their homeland, which were adapted to the local context. Many were passed on through generations by raconteurs, or storytellers. Almost all of the stories native to Quebec were influenced by Christian dogma and superstitions. The Devil, for instance, appears often as a person, an animal or a monster, or indirectly through demonic acts. Various tales and stories are told through oral tradition, such as, among many others, the legends of the Bogeyman, the Chasse-galerie, the Black Horse of Trois-Pistoles, the Complainte de Cadieux, the Corriveau, the dancing devil of Saint-Ambroise, the Giant Beaupré, the monsters of lakes Pohénégamook and Memphremagog, the Quebec Bridge (called the Devil's Bridge), the Rocher Percé and Rose Latulipe. Quebec literature first developed in the travel accounts of the explorers of New France. The Moulin à paroles traces the great texts that have shaped the history of Quebec. The first to write a history of Quebec since its discovery was the historian François-Xavier Garneau.
Many Quebec poets and prominent authors marked their era and remain anchored in the collective imagination, among them Philippe Aubert de Gaspé, Octave Crémazie, Honoré Beaugrand, Émile Nelligan, Lionel Groulx, Gabrielle Roy, Hubert Aquin, Michel Tremblay, Marie Laberge, Fred Pellerin and Gaston Miron. The regional novel from Quebec, known as the terroir novel, is a literary tradition specific to the province. The art of Quebec has developed around the specific characteristics of its landscapes and its cultural, historical, social and political representations. The development of Quebec masterpieces in painting, printmaking and sculpture is marked by the contribution of artists such as Louis-Philippe Hébert, Cornelius Krieghoff, Alfred Laliberté, Marc-Aurèle Fortin, Marc-Aurèle de Foy Suzor-Coté, Jean Paul Lemieux, Clarence Gagnon, Adrien Dufresne, Alfred Pellan, Jean-Philippe Dallaire, Charles Daudelin, Arthur Villeneuve, Jean-Paul Riopelle, Paul-Émile Borduas and Marcelle Ferron. The fine arts of Quebec are displayed at the Quebec National Museum of Fine Arts, the Montreal Museum of Contemporary Art, the Montreal Museum of Fine Arts, the Quebec Salon des métiers d'art and in many art galleries. The Montreal School of Fine Arts trains the painters, printmakers and sculptors of Quebec. Quebec's architecture is characterized by its unique Canadien-style buildings as well as by the juxtaposition of a variety of styles reflecting Quebec's history. Walking in any city or town, one can come across buildings in styles such as Classical, Neo-Gothic, Romanesque, Neo-Renaissance, Greek Revival, Neo-Classical, Québécois Neo-Classical, Victorian, Second Empire, Modern and Post-modern, as well as skyscrapers. Canadien-style houses and barns were developed by the first settlers of New France along the banks of the Saint Lawrence River. These buildings are rectangular one-storey structures with an extremely tall and steep roof, sometimes almost twice as tall as the house below. Canadien-style churches also developed and served as landmarks for those traversing rural Quebec. Several sites, houses and historical works reflect the cultural heritage of Quebec, such as the Village Québécois d'Antan, the historical village of Val-Jalbert, Fort Chambly, the national home of the Patriots, the Chicoutimi pulp mill (Pulperie de Chicoutimi), the Lachine Canal and the Victoria Bridge. As of December 2011, there were 198 National Historic Sites of Canada in Quebec, designated as being of national historic significance. Various museums tell the cultural history of Quebec, like the Museum of Civilization, the Museum of French America, the McCord Museum and the Montreal Museum of Archaeology and History in Pointe-à-Callière, displaying artifacts, paintings and other remains from Quebec's past. Notable schools include the Conservatoire de musique et d'art dramatique du Québec, the École nationale de théâtre du Canada and the École nationale de cirque. Notable public agencies that catalogue and further develop Quebec's culture include the Bibliothèque et Archives nationales du Québec, the Conseil des arts et des lettres du Québec and Télé-Québec. The Association québécoise des loisirs folkloriques is an organization committed to preserving and disseminating Quebec's folklore heritage. Traditional Québécois cuisine descends from 16th-century French cuisine, the fur trade and a history of hunting.
Quebec's cuisine has also been influenced by First Nations cuisine, by English cuisine and by American cuisine. Quebec is most famous for its tourtière, pâté chinois, poutine, and St. Catherine's taffy, among others. "Le temps des sucres" is a period during springtime when many Quebecers go to the sugar shack (cabane à sucre) for a traditional meal. Quebec is the world's biggest maple syrup producer. The province has a long history of producing maple syrup and creating new maple-derived products. Other major food products include beer, wine (including ice wine and ice cider), and cheese. Sports in Quebec constitute an essential dimension of Quebec culture. Ice hockey remains the national sport. The sport was played for the first time on March 3, 1875, in Montreal and has been promoted over the years by numerous achievements, including the centenary of the Montreal Canadiens. Other major sports include Canadian football with the Montreal Alouettes, soccer with Club de Foot Montréal, Formula 1 racing at the Grand Prix du Canada with drivers such as Gilles Villeneuve and Jacques Villeneuve, and professional baseball with the former Montreal Expos. Quebec has hosted several major sporting events, including the 1976 Summer Olympics, the Fencing World Championships in 1967, the Track Cycling World Championships in 1974, and the Transat Québec-Saint-Malo race, created in 1984. Quebec athletes have performed well at the Winter Olympics in recent years. They won 12 of Canada's 29 medals at the most recent Winter Olympics in Pyeongchang (2018); they won 12 of the 27 Canadian medals in Sochi (2014); and 9 of the 26 Canadian medals in Vancouver (2010). St-Jean-Baptiste Day is one of Quebec's biggest holidays. In 1977, the Quebec Parliament declared June 24, the day of La Saint-Jean-Baptiste, to be Quebec's National Holiday. La Saint-Jean-Baptiste, or La St-Jean, honours French Canada's patron saint, John the Baptist. On this day, the song "Gens du pays", by Gilles Vigneault, is often heard. The song "À la claire fontaine" was the anthem of New France, the Patriots and French Canadians, and was later replaced by "O Canada"; many Quebecers, however, would prefer "Gens du pays" as the national anthem of Quebec. National Patriots' Day, a statutory holiday in Quebec, is also a unique public holiday, which honours the patriotes with displays of the patriote flag, music, public speeches, and ceremonies. Le Vieux de '37 ("The Old Man of '37"), an illustration by Henri Julien that depicts a patriot of the 1837 rebellion, is sometimes added at the centre of Patriote flags. Moving Day is a tradition whereby leases terminate on July 1, creating a social phenomenon in which everyone seems to be moving out at the same time. Other distinct holiday traditions include the Réveillon, a giant feast and party which takes place during Christmas Eve and New Year's Eve and goes on until midnight. Traditional dishes like tourtière or cipâte are served, and music may be played on spoons or fiddle for dances such as the rigaudon. Finally, April Fools' Day is called Poisson d'Avril ("April's Fish") because, while pulling pranks remains important, there is another major tradition: sticking fish-shaped paper cutouts to people's backs without them noticing. In 1939, the government of Quebec unilaterally modified its coat of arms to reflect Quebec's political history: French rule (gold lily on blue background), followed by British rule (lion on red background), followed by Canadian rule (maple leaves).
Je me souviens ("I remember") is an official part of the coat of arms and has been the official licence plate motto since 1978, replacing the previous motto, La belle province ("the beautiful province"), which is still used as a nickname for the province. The fleur-de-lis, one of Quebec's most common symbols, is an ancient symbol of the French monarchy. Finally, the Great Seal of Quebec is used to authenticate documents issued by the government of Quebec. The first members of the Saint-Jean-Baptiste Society created the Carillon Sacré-Coeur flag, which consisted of a white cross on an azure background with a white fleur-de-lis in each corner and a Sacred Heart surrounded by maple leaves in the centre; it was based on the French merchant flag flown by Champlain and the Flag of Carillon. The Carillon Sacré-Coeur and the French merchant flag went on to be the major inspirations for creating Quebec's current flag in 1903, called the Fleurdelisé. The Fleurdelisé replaced the Union Jack on Quebec's Parliament Building on January 21, 1948. Three new official emblems were adopted in the late 20th century: the snowy owl in 1987, to symbolize the whiteness of Quebec's semi-northern climate; the yellow birch in 1993, for the variety of its uses and its commercial value; and the Iris versicolor in 1999, to illustrate the cultural diversity of Quebec and the importance of water and wetlands for the balance of nature.
Quebec's diaspora
The earliest immigrants to the Canadian prairies were French Canadians from Quebec. Many Franco-Albertans, Fransaskois and Franco-Manitobans are descended from them. From the mid-1800s to the Great Depression, Quebec experienced the Grande Hémorragie ("Great Hemorrhaging"), a massive emigration of 900,000 people from Quebec to New England. French Canadians often established themselves in Little Canadas in many industrial New England centres. Of the 900,000 Québécois who emigrated, about half returned. Most of the descendants of those who stayed are now assimilated, though a few Franco-Americans remain, speaking New England French. Some tried to slow the Grande Hémorragie by redirecting people north, which resulted in the settlement of many new regions in Quebec (e.g., Saguenay–Lac-Saint-Jean and Val-d'Or) as well as in northeastern Ontario. The northeastern Franco-Ontarians of today, who live in Timmins, Hearst, Moosonee and Sault Sainte Marie, among other places, are the descendants of emigrants from Quebec who worked in the mines of the area. In recent times, snowbirds often migrate to southern Florida during the winter, resulting in the emergence of temporary "Québécois regions", such as in Hollywood, Florida.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Speedcoding] | [TOKENS: 700] |
Speedcoding
Speedcoding, Speedcode or SpeedCo was the first high-level programming language[a] created for an IBM computer. The language was developed by John W. Backus in 1953 for the IBM 701 to support computation with floating point numbers. The idea arose from the difficulty of programming the IBM SSEC machine when Backus was hired to calculate astronomical positions in early 1950. The Speedcoding system was an interpreter and focused on ease of use at the expense of system resources. It provided pseudo-instructions for common mathematical functions: logarithms, exponentiation, and trigonometric operations. The resident software analyzed pseudo-instructions one by one and called the appropriate subroutine. Speedcoding was also the first implementation of decimal input/output operations. Although it substantially reduced the effort of writing many jobs, the running time of a program written with the help of Speedcoding was usually ten to twenty times that of machine code. The interpreter took 310 memory words, about 30% of the memory available on a 701.
History and development
In August 1952, several dozen IBM engineers and IBM 701 customers met in Poughkeepsie, New York, to exchange ideas and best practices on programming the new machines in assembly. Several attendees expressed frustration with the slow nature of assembly programming and debugging, and questioned the utility of the 701 in applications where solutions to problems were needed quickly, or when the value of a solution justified the expense of computation time but not the cost of programming and debugging. Attendees likewise complained about issues with "scaling", or the need to meticulously track the decimal point in arithmetic operations. John W. Sheldon, a supervisor of IBM's Technical Computing Bureau who attended the meeting, and others felt that an "interpretive" programming system that utilized floating point operations was the best solution to this problem. Sheldon asked John Backus, who had previously worked on a CPC to SSEC code translator, to supervise the creation of a new floating-point interpretive programming language for internal IBM use. Backus himself had previously expressed interest in improving programming methods, having observed that computing costs were roughly equally split between the cost of computation and the cost of programming personnel, and that the additional expense of testing made labor the considerably larger expense. Starting in 1953, Backus and five colleagues designed this new language and named it "Speedcoding"; its use soon spread outside of IBM to customer installations of the 701 system.
Syntax and semantics
Speedcoding programs are organized as a series of instructions, each of which is stored in memory as a single 72-bit data word. An instruction generally consists of two operations (OP1 and OP2) and four memory addresses. The first operation (OP1) is a mathematical or input/output operation that has three associated memory addresses, one or more of which can be modified depending on the nature of the operation. Mathematical operations include basic arithmetic, square root, and trigonometric functions. The input/output operations include functionality for reading, writing, skipping, and rewinding magnetic tape, as well as operations for interacting with data stored on drum memory. The second operation (OP2) is a logical operation that has the remaining associated memory address.
Logical operations allow instructions to be carried out in a different order from the one in which they are written, allowing for implementations of gotos, conditionals, loops, and other advanced behavior.
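The two-operation word layout described above can be made concrete with a small sketch. The following Python sketch decodes a 72-bit instruction word and dispatches OP1 to a floating-point subroutine while OP2 controls the order of execution. The field widths, opcode values (such as OP2_TRANSFER), and the decode/step helpers are all illustrative assumptions: the source states only that each word holds two operations and four addresses, not the actual IBM 701 encoding.

    import math
    from dataclasses import dataclass

    # Hypothetical field widths; 2*8 + 4*12 = 64 of the 72 bits are used
    # in this assumed layout. The real encoding is not given in the text.
    OP_BITS, ADDR_BITS = 8, 12

    @dataclass
    class Instruction:
        op1: int  # mathematical or input/output operation
        a: int    # the three addresses associated with OP1
        b: int
        c: int
        op2: int  # logical operation (sequencing, transfer of control)
        d: int    # the remaining address, associated with OP2

    def decode(word: int) -> Instruction:
        """Unpack one 72-bit instruction word, most significant field first."""
        fields, remaining = [], 72
        for width in (OP_BITS, ADDR_BITS, ADDR_BITS, ADDR_BITS, OP_BITS, ADDR_BITS):
            remaining -= width
            fields.append((word >> remaining) & ((1 << width) - 1))
        return Instruction(*fields)

    memory = [0.0] * 4096  # floating-point operand store

    MATH_OPS = {  # illustrative OP1 subroutines, called one by one
        0x01: lambda x, y: x + y,
        0x02: lambda x, y: x * y,
        0x10: lambda x, _: math.sqrt(x),
        0x11: lambda x, _: math.sin(x),
    }
    OP2_TRANSFER = 0x20  # assumed "transfer control to address D" opcode

    def step(inst: Instruction, pc: int) -> int:
        """Execute one instruction and return the next program counter."""
        if inst.op1 in MATH_OPS:  # OP1: operate on addresses A, B; store at C
            memory[inst.c] = MATH_OPS[inst.op1](memory[inst.a], memory[inst.b])
        if inst.op2 == OP2_TRANSFER:  # OP2 may reorder execution (gotos, loops)
            return inst.d
        return pc + 1

    # Usage: "ADD mem[2], mem[3] -> mem[4], then transfer control to word 7".
    word = (0x01 << 64) | (2 << 52) | (3 << 40) | (4 << 28) | (OP2_TRANSFER << 20) | (7 << 8)
    memory[2], memory[3] = 1.5, 2.25
    print(step(decode(word), pc=0), memory[4])  # -> 7 3.75

The design mirrors the description in the article: an interpreter fetches one pseudo-instruction at a time, calls the appropriate subroutine for OP1, and lets OP2 redirect the flow of control, trading machine time for ease of programming.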
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Reality_Labs#Acquisition_by_Facebook] | [TOKENS: 5735] |
Reality Labs
Reality Labs, formerly Oculus VR, is a business and research unit of Meta Platforms (formerly Facebook Inc.) that produces virtual reality (VR) and augmented reality (AR) hardware and software, including virtual reality headsets such as the Quest, and online platforms such as Horizon Worlds. In June 2022, several artificial intelligence (AI) initiatives that were previously a part of Meta AI were transitioned to Reality Labs. This also includes Meta's fundamental AI research laboratory FAIR, which is now part of the Reality Labs - Research (RLR) division. The Reality Labs unit is the result of the merger of several initiatives under Meta Platforms and the incorporation of several acquired companies. These include CTRL-Labs, founded by Thomas Reardon, which develops non-invasive neural interface technology, as well as Oculus, a company that was founded in 2012 by Palmer Luckey, Brendan Iribe, Michael Antonov and Nate Mitchell to develop a VR headset for video gaming.
History
As a head-mounted display (HMD) designer at the University of Southern California Institute for Creative Technologies, Palmer Luckey earned a reputation for having the largest personal collection of HMDs in the world, and was a longtime moderator in Meant to be Seen (MTBS)'s discussion forums. Luckey created a series of new technologies that resulted in a VR headset that was both higher-performance than what was then on the market and inexpensive for gamers. To develop the new product, Luckey founded Oculus VR with Scaleform co-founders Brendan Iribe and Michael Antonov, as well as Nate Mitchell and Andrew Scott Reisse. Coincidentally, John Carmack of id Software had been doing his own research on HMDs and happened upon Luckey's developments. After sampling an early unit, Carmack favored Luckey's prototype, and just before the 2012 Electronic Entertainment Expo (E3), id Software announced that the BFG Edition of Doom 3 would be compatible with head-mounted display units. During the convention, Carmack introduced a duct-taped head-mounted display, based on Luckey's Oculus Rift prototype, which ran Carmack's software. The unit featured a high-speed IMU and a 5.6-inch (14 cm) LCD, visible via dual lenses that were positioned over the eyes to provide a 90 degree horizontal and 110 degree vertical stereoscopic 3D perspective. Carmack later left id Software when he was hired as Oculus VR's chief technology officer. The Oculus Rift prototype was demonstrated at E3 in June 2012. On August 1, 2012, the company announced a Kickstarter campaign to further develop the product. Oculus announced that the "dev kit" version of the Oculus Rift would be given as a reward to backers who pledged $300 or more on Kickstarter, with an expected shipping date set for December 2012 (though they did not actually ship until March 2013). There was also a limited run of 100 unassembled Rift prototype kits for pledges over $275 that would ship a month earlier. Both versions were intended to include Doom 3 BFG Edition, but Rift support in the game was not ready, so to make up for it Oculus included a choice of discount vouchers for either Steam or the Oculus store. Within four hours of the announcement, Oculus secured its intended amount of US$250,000, and in less than 36 hours, the campaign had surpassed $1 million in funding, eventually ending with $2,437,429. On December 12, 2013, Marc Andreessen joined the company's board when his firm, Andreessen Horowitz, led the $75 million Series B venture funding.
In total, Oculus VR raised $91 million, with $2.4 million raised via crowdfunding. Although Oculus had only released a development prototype of its headset, on March 25, 2014, Mark Zuckerberg announced that Facebook, Inc. would be acquiring Oculus for US$2 billion, pending regulatory approval. The deal included $400 million in cash and 23.1 million common shares of Facebook, valued at $1.6 billion, as well as an additional $300 million should Facebook reach certain milestones. The move was ridiculed by some backers who felt the acquisition ran counter to the independent ideology of crowdfunding. Many Kickstarter backers and game industry figures, such as Minecraft developer Markus Persson, criticized the sale of Oculus to Facebook. On March 28, 2014, Michael Abrash joined the company as Chief Scientist. In January 2015, Oculus moved its headquarters from Irvine, California, to Menlo Park, where Facebook's headquarters are also located; Oculus stated that the move was intended to bring its employees closer to Silicon Valley. In May 2015, Oculus acquired the British company Surreal Vision, which specialized in 3D scene-mapping reconstruction and augmented reality. News outlets reported that Oculus and Surreal Vision could create "mixed reality" technology in Oculus' products, similar to the then-upcoming Microsoft HoloLens HMD, and that Oculus, with Surreal's help, would make telepresence possible. On March 28, 2016, the first consumer version of Oculus Rift, the Oculus Rift "CV1", was released. In October 2017, Oculus unveiled the standalone mobile headset Oculus Go in partnership with Chinese electronics manufacturer Xiaomi. On December 28, 2016, Facebook acquired Danish eye tracking startup The Eye Tribe. In September 2018, Oculus became a division of a new structural entity within Facebook known as Facebook Technologies, LLC. Facebook announced in August 2018 that it had entered negotiations to lease the entire Burlingame Point campus in Burlingame, California, then under construction. The lease was executed in late 2018, and the site, owned by Kylli, a subsidiary of Genzon Investment Group, was expected to be complete by 2020, with Oculus expected to move to Burlingame Point once development was complete. In May 2019, Facebook released Oculus Quest, a high-end standalone headset. In March 2019, Facebook unveiled Oculus Rift S, an updated revision of the original Rift PC headset developed in partnership with Chinese electronics manufacturer Lenovo, which featured updated hardware and features carried over from the Go and Quest. On August 13, 2019, Nate Mitchell, Oculus co-founder and VP of product, announced his departure from the company. On November 13, 2019, John Carmack wrote in a Facebook post that he would step down as CTO of Oculus to focus on developing artificial general intelligence, stating that he would remain involved with the company as a "Consulting CTO". In September 2020, Facebook unveiled Oculus Quest 2, an update to the original Quest with a revised design and updated hardware. Upon the acquisition of Oculus by Facebook, Inc., Luckey "guaranteed" that "you won't need to log into your Facebook account every time you wanna use the Oculus Rift." Under Facebook's ownership, Oculus has been promoted as a brand of Facebook rather than an independent entity and has increasingly integrated Facebook platforms into Oculus products.
Support for optional Facebook integration was added to the Gear VR in March 2016, with a focus on integration with the social network and with features such as Facebook Video and social games. By 2016, the division began to be largely marketed as "Oculus from Facebook". In September 2016, support for optional Facebook integration was added to the Oculus Rift software, automatically populating the friends list with Facebook friends who had also linked their accounts (displaying them to each other under their real names, but still displaying screen names to anyone else). Users were increasingly encouraged to use Facebook accounts to sign into its services (although standalone accounts not directly linked to the service were still supported). In 2018, Oculus VR became a division of Facebook Technologies, LLC, to create "a single legal entity that can support multiple Facebook technology and hardware products" (such as Facebook Portal). On August 18, 2020, Facebook announced that all "decisions around use, processing, retention, and sharing of [user] data" on its platforms would be delegated to the Facebook social network moving forward. Users became subject to the unified Facebook privacy policy, code of conduct, and community guidelines, and all users would be required to have a Facebook account to access Oculus products and services. Standalone account registration became unavailable in October 2020, all future Oculus hardware (beginning with Quest 2) would only support Facebook accounts, and support for existing standalone Oculus accounts on already-released products would end on January 1, 2023. Facebook stated that this was needed to facilitate "more Facebook powered multiplayer and social experiences" and make it "easier to share across our platforms". Facebook stated that users would still be able to control sharing from Oculus, maintain a separate friends list within the Oculus platform, and hide their real name from others. Users and media criticized Facebook for the move. Ars Technica noted that there was no clear way to opt out of information tracking and that the collected data would likely be used for targeted advertising; furthermore, Facebook required the use of a person's real name. In September 2020, Facebook temporarily suspended sales of all Oculus products in Germany; a German watchdog had raised concerns that this integration requirement, together with the requirement that existing users link a Facebook account to keep using Oculus hardware and services, violated the General Data Protection Regulation (GDPR), which prohibits making use of a service contingent on consent to the collection of personally identifiable information. On August 25, 2020, Facebook announced the formation of Facebook Reality Labs, a new unit that would encompass all of Facebook's virtual and augmented reality (AR) hardware and software, including Oculus, Portal, and Facebook Spark AR. The Oculus Connect conference was also renamed Facebook Connect. In June 2021, Facebook announced it would do a test launch of targeted advertisements in applications for Oculus Quest. The company claimed that movement data, voice recordings and raw images from the headset would not be used in targeting; instead, the ads would rely on information from the user's Facebook profile and all user activity related to Oculus, including apps used or installed. The company did not state whether ads would appear only in applications or in the Oculus Home experience as well.
In July 2021, Facebook announced it would be deprecating its proprietary Oculus API and adding full support for OpenXR. On October 25, 2021, during Connect, Facebook announced that it would invest $10 billion over the next year into Reality Labs, and that it would begin to report its revenue separately from the Facebook "Family of Apps", which includes Facebook, Messenger, Instagram, and WhatsApp. Three days later, on October 28, Facebook announced that it would change its corporate name to Meta (legally Meta Platforms, Inc.), as part of the company's long-term focus on metaverses and related technologies. The company also teased a "high-end" mixed reality headset codenamed "Project Cambria". As a result, CTO Andrew Bosworth announced that the Oculus brand would be phased out in 2022; all Facebook hardware products would be marketed under the Meta name, and the Oculus Store would be renamed the Quest Store. Likewise, immersive social platforms associated with Oculus would be brought under the Horizon brand (such as Horizon Worlds). He also stated that "as we've heard feedback from the VR community more broadly, we're working on new ways to log into Quest that won't require a Facebook account, landing sometime next year. This is one of our highest priority areas of work internally". In January 2022, the Oculus social media accounts were renamed "Meta Quest" in reference to its current VR product line. Concurrently, Meta began to retroactively refer to the Quest 2 as the "Meta Quest 2", a change that has since been reflected in the packaging and hardware of subsequent units. In July 2022, Meta began to migrate Oculus accounts to the new "Meta account" system, which can be optionally linked with Facebook, Instagram, and WhatsApp accounts. In October 2022, "Project Cambria" was officially unveiled as the Meta Quest Pro. In January 2026, Reality Labs cut 10% of its jobs as part of a streamlining of its VR investments, intended to raise the quality of its software and hardware and make the business "more sustainable". Meta plans to stop sales of commercial SKUs of Meta Quest headsets on February 20, 2026, as part of this streamlining effort. As of January 2026, Reality Labs has accumulated a total of $80 billion in operating losses since late 2020.
Products
The initial Oculus headsets, produced under the "Oculus Rift" brand, are traditional VR headsets that require a PC to operate. In May 2019, Facebook released Oculus Quest, a standalone headset which contains integrated mobile computing hardware and does not require a PC to operate, but can optionally be used with Oculus Rift-compatible VR software by connecting it to a PC over USB-C. In 2018, Facebook CEO Mark Zuckerberg stated that the original Oculus Rift "CV1", the Oculus Go (a lower-end standalone headset unveiled in 2017), and the Quest represented the company's first generation of products, and expected that successors to the three headsets would form its second generation. Oculus began to phase out the original Oculus Rift "CV1" in 2019 in favor of the Oculus Rift S, a follow-up to the original model manufactured by Lenovo that incorporates elements of the Go and Quest. In September 2020, the Oculus Quest 2 was unveiled as an updated iteration of the first-generation Quest, and the Rift S was concurrently discontinued, making Quest the division's sole active product line. On September 26, 2018, Facebook unveiled Oculus Quest.
It is a standalone headset which is not dependent on a PC for operation; the Quest contains embedded mobile hardware running an operating system based on Android source code, including a Snapdragon 835 system-on-chip and 64 or 128 GB of internal storage. It contains two OLED displays with a resolution of 1600×1440 per eye, running at 72 Hz. It supports the included Oculus Touch controllers via an "inside-out" motion tracking system known as Oculus Insight, which consists of a series of cameras embedded in the headset; the controllers were redesigned to properly function with Insight. It supports games and applications downloaded via the Oculus Store, with ported launch titles such as Beat Saber and Robo Recall. It also supports cross-platform multiplayer and cross-buy between PC and Quest. Facebook stated that it would impose stricter content and quality standards for software distributed for Quest than for its other platforms, including requiring developers to undergo a pre-screening of their concepts to demonstrate "quality and probable market success". In June 2019, Facebook announced it had sold $5 million worth of content for the Oculus Quest in its first two weeks on sale. In November 2019, Facebook released a beta for a new feature known as Oculus Link, which allows Oculus Rift-compatible software to be streamed from a PC to a Quest headset over USB. In May 2020, Facebook added support for the use of USB 2.0 cables, such as the charging cable supplied with the headset. Support for controller-free hand tracking was also launched that month. In September 2020, Facebook unveiled an updated version of the Quest, the Oculus Quest 2. It is similar to the original Quest, but with the Snapdragon XR2 system-on-chip and additional RAM, an all-plastic exterior, new cloth head straps, updated Oculus Touch controllers with improved ergonomics and battery life, and an 1832×1920 per-eye display running at 90 Hz, with up to 120 Hz as an experimental option. Similarly to the Rift S, it uses a single display panel rather than individual panels for each eye. Due to this design, it has more limited inter-pupillary distance options than the original Quest, with the ability to physically move the lenses to adjust for three common measurements. The Quest 2's models were both priced US$100 cheaper than their first-generation equivalents at launch, but prices were increased in July 2022 for economic reasons. In October 2022, Meta unveiled the Quest Pro, a mixed reality headset aimed primarily at enterprise and prosumer markets. The headset uses quantum dot displays, with thinner optics using pancake lenses for a more visor-like form factor, and has upgraded color passthrough cameras designed to facilitate mixed reality applications. Its hardware is upgraded from the Quest 2, with the Snapdragon XR2+ system-on-chip, increased RAM, and updated controllers with built-in tracking. These controllers were also made available for the existing Quest 2 as an optional accessory. On June 1, 2023, Meta announced the Quest 3, which was released on October 10, 2023. It features design and hardware elements from the Quest Pro, including pancake lenses for a slimmer build, upgraded hardware (including the Snapdragon XR2 Gen 2 system-on-chip) and higher resolution displays, color passthrough cameras for mixed reality, a depth sensor, and updated controllers inspired by the design of the Quest Pro (albeit still using inside-out tracking via infrared sensors, as with its predecessors).
Meta positioned the Quest 3 as a high-end model, with the Quest 2 continuing to be sold alongside it. On April 22, 2024, Meta announced that its Android-based system software would be branded as "Horizon OS", and that it would license the platform to third parties. Meta announced initial hardware partners such as Asus and Lenovo, as well as a partnership with Microsoft for a "limited edition" Xbox-branded Quest bundled with Xbox Wireless Controllers and Game Pass. Meta also stated that it was developing a "spatial app framework" to help port non-VR Android apps to Horizon OS, and that it was open to working with Google to support the Play Store on Horizon OS, moves considered a parallel to Apple's support of iOS applications on visionOS. In 2024, leaks from Meta revealed an upcoming Quest model known as the Quest 3S, expected to be a low-end variant of the Quest 3 designed to supplant the Quest 2. The Quest 3S was unveiled on September 25, 2024, and released on October 15, 2024, as part of the third generation of the Meta Quest line, serving as a cheaper option for new and budget-conscious VR players. The Oculus Rift CV1, also known simply as the Oculus Rift, was the first consumer model of the Oculus Rift headset. It was released on March 28, 2016, in 20 countries, at a starting price of US$599. The 6,955 backers who received the Development Kit 1 prototype via the original Oculus Rift Kickstarter campaign were eligible to receive the CV1 model for free. On December 6, 2016, Oculus released motion controller accessories for the headset known as Oculus Touch. In 2014, Samsung partnered with Oculus to develop the Gear VR, a VR headset accessory for Samsung Galaxy smartphones. It relies on the phone's display, which is viewed through lenses inside the headset. At Oculus Connect in September 2015, the Gear VR was announced for a consumer release in November; the initial model supported the Galaxy S6 and Galaxy S7 product lines, as well as the Galaxy Note 5. On October 11, 2017, Oculus unveiled the Oculus Go, a mobile VR headset manufactured by Xiaomi (the device was released in the Chinese market as the Xiaomi Mi VR). Unlike the Oculus Rift, the Go is a standalone headset which is not dependent on a PC for operation. Unlike VR systems such as Cardboard, Daydream, and the Oculus co-developed Samsung Gear VR (where VR software is run on a smartphone inserted into a physical enclosure, and its screen is viewed through lenses), it contains its own dedicated display and mobile computing hardware. The headset includes a 5.5-inch 1440p fast-switching liquid-crystal display (LCD), integrated speakers with spatial audio and a headphone jack for external audio, a Qualcomm Snapdragon 821 system-on-chip, and 32 or 64 GB of internal storage. It runs an Android-based operating system with access to VR software via the Oculus Home user experience and app store, including games and multimedia apps. The Go includes a handheld controller reminiscent of the one designed for the Gear VR, which uses relative motion tracking; the Oculus Go does not use positional tracking. While official sales numbers have not been released, according to IDC the Oculus Go and Xiaomi Mi VR had sold nearly a quarter million units combined during the third quarter of 2018, and in January 2019 market analysis firm SuperData estimated that over a million Oculus Go units had been sold since the device's launch.
In his keynote at the 2018 Oculus Connect developer conference, John Carmack revealed that the Go's retention rate was as high as the Rift's, something that nobody at the company had predicted. Carmack also noted that the Go had done especially well in Japan despite lacking internationalization support and the company not specifically catering to the Japanese market. The Oculus Go was declared end-of-life in June 2020, with software submissions ending in December 2020 and firmware support ending in 2022. On March 20, 2019, at the Game Developers Conference, Facebook announced the Oculus Rift S, a successor to the original Oculus Rift headset. It was co-developed with and manufactured by Lenovo, and launched at a price of US$399. The Rift S contains hardware features from the Oculus Go and Oculus Quest, including Oculus Insight, integrated speakers, and a new "halo" strap. The Rift S uses the same 1440p LCD and lenses as the Oculus Go (a higher resolution in comparison to the original model, but lower in comparison to the Oculus Quest), running at 80 Hz, and is backwards compatible with all existing Oculus Rift games and software. Unlike the original Oculus Rift, it does not have hardware control for inter-pupillary distance. In September 2020, Facebook announced it would be discontinuing the Oculus Rift S, and in April 2021 shipments of the headset ceased. In September 2021, Reality Labs and Ray-Ban announced Ray-Ban Stories, a collaboration on camera-equipped smart glasses that can upload video to Facebook. In the following years, additional AI glasses have been released through Reality Labs' partnership with EssilorLuxottica, including a premium version featuring a small heads-up display.
Divisions
Oculus Studios is a division of Meta that serves as an umbrella organization for its first-party game development studios, such as Beat Games, Within and Camouflaj. Initially the division was more broadly focused on funding, publishing and giving technical advice to second- and third-party studios creating games and experiences for the Oculus Rift. Meta pledged to invest more than US$500 million in Oculus Studios to make games and content. This period saw Meta build multi-game relationships with prominent studio partners in a second-party capacity, such as Insomniac Games, Twisted Pixel Games, Turtle Rock Studios, and Gunfire Games. As focus moved away from the Rift and towards the very successful Meta Quest 2, the priority shifted to acquiring developers as first-party studios, so they could make exclusive games in-house instead. Starting in 2020, Meta purchased both Beat Games (Beat Saber) and Sanzaru Games (Asgard's Wrath) and integrated them into Oculus Studios. Ready at Dawn, a game studio composed of former members of Naughty Dog and Blizzard Entertainment that had also developed the Oculus Rift exclusive Lone Echo, was acquired in June 2020. In 2021, Meta began a deliberate effort of buying up studios that had made strong sales on its Quest 2 platform. In April 2021, Downpour Interactive, the developer of the virtual reality multiplayer FPS Onward, was purchased. The team would migrate over to Oculus Studios, although the game would continue to receive updates on all supported VR platforms. In May 2021, Meta bought BigBox VR, the developers of the popular battle royale game Population One. In June 2021, Meta purchased Unit2 Games, the makers of Crayta, a free-to-play platform that allows players to create and share their games via Facebook Gaming.
Finally, in November 2021, Meta purchased the formerly Microsoft-owned studio Twisted Pixel Games. The developer had been a successful second-party studio for Meta since 2017, and had produced the VR games Wilson's Heart, B-Team, Defector, and Path of the Warrior, all exclusively for Oculus platforms. Additionally, in October 2021, Meta announced it was purchasing Within, the studio behind the successful VR fitness app Supernatural; the studio was to continue operating independently as part of Reality Labs. Later that year, the FTC conducted a probe into the $400 million deal. In July 2022, the FTC sued Meta, arguing that with the purchase of the studios behind both Beat Saber and Supernatural, Meta would unfairly corner the VR fitness market. The legal action blocked the purchase until February 2023, when the FTC lawsuit was denied and Meta's purchase of Within went ahead. At the Meta Connect 2022 event in October, Meta announced that it had acquired Armature Studio and Camouflaj as new members of Oculus Studios. Armature had created the highly popular Quest 2 VR port of Resident Evil 4. Camouflaj was best known for making Republique and the PSVR exclusive Iron Man VR for Sony; the deal would see the studio port the latter game to the Quest 2 platform. On January 13, 2026, it was announced that Sanzaru Games, Twisted Pixel, and Armature Studio would be closed as part of Meta's effort to streamline and improve the quality of Horizon OS and future VR HMDs, and to help fund AI research and AR wearables. In 2023, Meta announced the formation of a new division called Oculus Publishing, aimed at third-party content funding, development support, and marketing. According to Meta, Oculus Publishing has been involved in the publishing of 300 titles, including Among Us VR, Bonelab, The Walking Dead: Saints & Sinners, and Blade & Sorcery: Nomad. Oculus Story Studio was an original animated virtual-reality film studio that existed between 2014 and May 2017 and launched three films. The studio aimed to pioneer animated virtual reality filmmaking and to educate, inspire, and foster community among filmmakers interested in VR. Oculus Story Studio was first launched publicly at the 2015 Sundance Film Festival, where it presented three VR films: Dear Angelica, Henry, and Lost. Despite generally positive reception and critical acclaim, the studio did not publish any other works and was closed in May 2017.
Litigation
Following Facebook's acquisition of Oculus VR, ZeniMax Media, the parent company of id Software and John Carmack's previous employer, sought legal action against Oculus, accusing the company of theft of intellectual property relating to the Oculus Rift due to Carmack's transition from id Software to Oculus. The case, ZeniMax v. Oculus, was heard in a jury trial in the United States District Court for the Northern District of Texas, and a verdict was reached in February 2017, finding that Carmack had taken code from ZeniMax and used it in developing the Oculus Rift's software, violating his non-disclosure agreement with ZeniMax; Oculus' use of the code was considered copyright infringement. ZeniMax was awarded $500 million in the jury verdict, later reduced to $250 million by the presiding judge, and the case was resolved in December 2018 through a confidential settlement agreement. In May 2022, Immersion Corporation sued Meta Platforms for patent infringement relating to the use of vibration functions in its gaming controllers.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/New_England] | [TOKENS: 13319] |
New England
New England is a region consisting of six states in the Northeastern United States: Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. It is bordered by the state of New York to the west and by the Canadian provinces of New Brunswick to the northeast and Quebec to the north. The Gulf of Maine and Atlantic Ocean are to the east and southeast, and Long Island Sound is to the southwest. Boston is New England's largest city and the capital of Massachusetts. Greater Boston, comprising the Boston–Worcester–Providence Combined Statistical Area, houses more than half of the region's total population. The Greater Boston area includes Worcester, Massachusetts, the second-largest city in New England; Manchester, New Hampshire, the largest city in New Hampshire; and Providence, Rhode Island, the capital of and largest city in Rhode Island. In 1620, the Pilgrims established Plymouth Colony, the second successful settlement in British America after the Jamestown Settlement in Virginia, founded in 1607. Ten years later, Puritans established Massachusetts Bay Colony north of Plymouth Colony. Over the next 126 years, people in the region fought in four French and Indian Wars, until the English colonists and their Iroquois allies defeated the French and their Algonquian allies. In the late 18th century, political leaders from the New England colonies initiated resistance to Britain's taxes imposed without the consent of the colonists. Residents of Rhode Island captured and burned a British Royal Navy ship which was enforcing unpopular trade restrictions, and residents of Boston threw British tea into the harbor. Britain responded with a series of punitive laws stripping Massachusetts of self-government, which the colonists called the "Intolerable Acts". These confrontations led to the first battles of the American Revolutionary War in 1775 and the expulsion of the British authorities from the region in spring 1776. The region played a prominent role in the movement to abolish slavery in the United States, and it was the first region of the U.S. transformed by the Industrial Revolution, initially centered on the Blackstone and Merrimack river valleys. The physical geography of New England is diverse. Southeastern New England is covered by a narrow coastal plain, while the western and northern regions are dominated by the rolling hills and worn-down peaks of the northern end of the Appalachian Mountains. The Atlantic fall line lies close to the coast, which enabled numerous cities to take advantage of water power along the many rivers, such as the Connecticut River, which bisects the region from north to south. Each state is generally subdivided into small municipalities known as towns, many of which are governed by town meetings. Unincorporated areas exist only in portions of Maine, New Hampshire, and Vermont, and village-style governments common in other areas are limited to Vermont and Connecticut. New England is one of the U.S. Census Bureau's nine regional divisions and the only multi-state region with clear and consistent boundaries. It maintains a strong sense of cultural identity, although the terms of this identity are often contested, combining Puritanism with liberalism, agrarian life with industry, and isolation with immigration.
History
Humans reached the current-day New England region by at least 10,500 years ago, and likely earlier, occupying a recently de-glaciated environment.
Pre-contact Native American groups in New England did not have market economies, and their physical artifacts tended to change very slowly. However, technological shifts brought agriculture and ceramics to the region prior to the arrival of European settlers in the 17th century. The communities inhabiting the territory in the period of European colonization spoke a variety of the Eastern Algonquian languages. Prominent tribes included the Abenakis, Mi'kmaq, Penobscot, Pequots, Mohegans, Narragansetts, Nipmucs, Pocumtucks, and Wampanoags. Prior to the arrival of European colonists, the Western Abenakis inhabited what is now New Hampshire, New York, and Vermont, as well as parts of Quebec and western Maine. Their principal town was Norridgewock in today's Maine. The Penobscots lived along the Penobscot River in Maine. The Narragansetts and smaller tribes under their sovereignty lived in Rhode Island, west of Narragansett Bay, including Block Island. The Wampanoags occupied southeastern Massachusetts, Rhode Island, and the islands of Martha's Vineyard and Nantucket. The Pocumtucks lived in Western Massachusetts, and the Mohegan and Pequot tribes lived in Connecticut. The Connecticut River Valley linked numerous tribes culturally, linguistically, and politically. As early as 1600 CE, French, Dutch, and English traders began exploring the New World, trading metal, glass, and cloth for local beaver pelts. On April 10, 1606, King James I of England issued a charter for the Virginia Company, which consisted of the London Company and the Plymouth Company. These two privately funded ventures were intended to claim land for England, to conduct trade, and to return a profit. In 1620, the Pilgrims arrived on the Mayflower and established Plymouth Colony in Massachusetts, beginning the history of permanent European colonization in New England. In 1616, English explorer John Smith had named the region "New England". The name was officially sanctioned on November 3, 1620, when the charter of the Virginia Company of Plymouth was replaced by a royal charter for the Plymouth Council for New England, a joint-stock company established to colonize and govern the region. The Pilgrims wrote and signed the Mayflower Compact before leaving the ship, and it became their first governing document. The Massachusetts Bay Colony came to dominate the area; it was established by royal charter in 1629, with its major town and port of Boston established in 1630. Massachusetts Puritans started to establish themselves in Connecticut as early as 1633. Roger Williams was banished from Massachusetts for theological reasons; he led a group south, where they founded Providence Plantations, which grew into the Colony of Rhode Island and Providence Plantations in 1636. At this time, Vermont was uncolonized, and the territories of New Hampshire and Maine were claimed and governed by Massachusetts. As the region grew, it received many immigrants from Europe due to its religious tolerance and economy. Relationships alternated between peace and armed skirmishes between colonists and local Native American tribes, the bloodiest of which was the Pequot War in 1637, which resulted in the Mystic massacre. On May 19, 1643, the colonies of Massachusetts Bay, Plymouth, New Haven, and Connecticut joined in a loose compact called the New England Confederation (officially "The United Colonies of New England").
The confederation was designed largely to coordinate mutual defense, and it gained some importance during King Philip's War, which pitted the colonists and their Indian allies against a widespread Indian uprising from June 1675 through April 1678, resulting in killings and massacres on both sides. In the aftermath of settler-Native conflicts, hundreds of captive Indians were sold into slavery. Up until 1700, Native Americans comprised a majority of the non-white labor force in colonial New England. During the next 74 years, there were six colonial wars that took place primarily between New England and New France, during which New England was allied with the Iroquois Confederacy and New France was allied with the Wabanaki Confederacy. Mainland Nova Scotia came under the control of New England after the Siege of Port Royal (1710), but both New Brunswick and most of Maine remained contested territory between New England and New France. The British eventually defeated the French in 1763, opening the Connecticut River Valley for British settlement into western New Hampshire and Vermont. The New England Colonies were settled primarily by farmers who became relatively self-sufficient. Later, New England's economy began to focus on crafts and trade, aided by the Puritan work ethic, in contrast to the Southern colonies, which focused on agricultural production while importing finished goods from England. By 1686, King James II had become concerned about the increasingly independent ways of the colonies, including their self-governing charters, their open flouting of the Navigation Acts, and their growing military power. He therefore established the Dominion of New England, an administrative union including all of the New England colonies. In 1688, the former Dutch colonies of New York, East New Jersey, and West New Jersey were added to the dominion. The union was imposed from the outside, contrary to the rooted democratic tradition of the colonies, and it was highly unpopular among the colonists. The dominion significantly modified the charters of the colonies, including the appointment of royal governors to nearly all of them. There was an uneasy tension among the royal governors, their officers, and the elected governing bodies of the colonies. The governors wanted unlimited authority, and the different layers of locally elected officials would often resist them. In most cases, the local town governments continued operating as self-governing bodies, just as they had before the appointment of the governors. After the Glorious Revolution, in 1689, Bostonians overthrew the royal governor, Sir Edmund Andros; during a popular and bloodless uprising, they seized dominion officials and adherents to the Church of England. These tensions eventually culminated in the American Revolution, boiling over with the outbreak of the War of American Independence in 1775, the first battles of which were fought in Lexington and Concord, Massachusetts, leading to the Siege of Boston by continental troops. In March 1776, British forces were compelled to retreat from Boston. After the dissolution of the Dominion of New England, the colonies of New England ceased to function as a unified political unit but remained a defined cultural region. There were often disputes over territorial jurisdiction, leading to land exchanges such as those regarding the Equivalent Lands and the New Hampshire Grants.
By 1784, all of the states in the region had taken steps towards the abolition of slavery, with Vermont and Massachusetts introducing total abolition in 1777 and 1783, respectively. The nickname "Yankeeland" was sometimes used to denote the New England area, especially among Southerners and the British. Vermont was admitted to statehood in 1791 after settling a dispute with New York. The territory of Maine had been a part of Massachusetts, but it was granted statehood on March 15, 1820, as part of the Missouri Compromise. Today, New England is defined as the six states of Maine, Vermont, New Hampshire, Massachusetts, Rhode Island, and Connecticut. New England's economic growth relied heavily on trade with the British Empire, and the region's merchants and politicians strongly opposed trade restrictions. As the United States and the United Kingdom fought the War of 1812, New England Federalists organized the Hartford Convention in the winter of 1814 to discuss the region's grievances concerning the war and to propose changes to the United States Constitution to protect the region's interests and maintain its political power. Radical delegates within the convention proposed the region's secession from the United States, but they were outnumbered by moderates who opposed the idea. Politically, the region often disagreed with the rest of the country. Massachusetts and Connecticut were among the last refuges of the Federalist Party, and New England became the strongest bastion of the new Whig Party when the Second Party System began in the 1830s. The Whigs were usually dominant throughout New England, except in the more Democratic Maine and New Hampshire. New England was key to the Industrial Revolution in the United States. The Blackstone Valley, running through Massachusetts and Rhode Island, has been called the birthplace of America's industrial revolution. In 1787, the first cotton mill in America was founded in the North Shore seaport of Beverly, Massachusetts, as the Beverly Cotton Manufactory. The Manufactory was also considered the largest cotton mill of its time. Technological developments and achievements from the Manufactory led to the development of more advanced cotton mills, including Slater Mill in Pawtucket, Rhode Island. Towns such as Lawrence, Massachusetts, Lowell, Massachusetts, Woonsocket, Rhode Island, and Lewiston, Maine became centers of the textile industry following the innovations at Slater Mill and the Beverly Cotton Manufactory. The Connecticut River Valley became a crucible for industrial innovation, particularly at the Springfield Armory, pioneering such advances as interchangeable parts and the assembly line, which influenced manufacturing processes all around the world. From early in the nineteenth century until the mid-twentieth, the region surrounding Springfield, Massachusetts, and Hartford, Connecticut served as the United States' epicenter for advanced manufacturing, drawing skilled workers from all over the world. The rapid growth of textile manufacturing in New England between 1815 and 1860 caused a shortage of workers. Recruiters were hired by mill agents to bring young women and children from the countryside to work in the factories. Between 1830 and 1860, thousands of farm girls moved from rural areas, where there was no paid employment, to work in the nearby mills, such as the Lowell Mill Girls. As the textile industry grew, immigration also grew. By the 1850s, immigrants began working in the mills, especially French Canadians and the Irish.
New England as a whole was the most industrialized part of the United States. By 1850, the region accounted for well over a quarter of all manufacturing value in the country and over a third of its industrial workforce. It was also the most literate and most educated region in the country. During the same period, New England and areas settled by New Englanders (upstate New York, Ohio's Western Reserve, and the upper midwestern states of Michigan and Wisconsin) were the center of the strongest abolitionist and anti-slavery movements in the United States, coinciding with the Protestant Great Awakening in the region. Abolitionists who demanded immediate emancipation had their base in the region, such as William Lloyd Garrison, John Greenleaf Whittier, and Wendell Phillips. So too did anti-slavery politicians who wanted to limit the growth of slavery, such as John Quincy Adams, Charles Sumner, and John P. Hale. The anti-slavery Republican Party was formed in the 1850s, and all of New England became strongly Republican, including areas that had previously been strongholds for both the Whig and the Democratic parties. New England remained solidly Republican until Catholics began to mobilize behind the Democrats, especially in 1928. This led to the end of "Yankee Republicanism" and began New England's relatively swift transition into a consistently Democratic stronghold in national elections. The flow of immigrants continued at a steady pace from the 1840s until it was cut off by World War I. The largest numbers came from Ireland and Britain before 1890, and after that from Quebec, Italy, and Southern Europe. The immigrants filled the ranks of factory workers, craftsmen, and unskilled laborers. The Irish and Italians assumed a larger and larger role in the Democratic Party in the cities and statewide, while the rural areas remained Republican. The Great Depression of the 1930s hit the region hard, with high unemployment in the industrial cities; as late as 1930, the Boston Stock Exchange had rivaled the New York Stock Exchange. The Democrats appealed to factory workers and especially Catholics, pulling them into the New Deal coalition and making the once-Republican region into one that was closely divided. However, the enormous spending on munitions, ships, electronics, and uniforms during World War II caused a burst of prosperity in every sector. The region began losing most of its factories with the loss of textiles in the 1930s, a decline that worsened after 1960. The New England economy was radically transformed after World War II. The factory economy practically disappeared. Once-bustling New England communities fell into economic decay following the flight of the region's industrial base. The textile mills went out of business one by one from the 1920s to the 1970s. For example, the Crompton Company went bankrupt in 1984 after 178 years in business, costing the jobs of 2,450 workers in five states. The major reasons were cheap imports, the strong dollar, declining exports, and a failure to diversify. The shoe industry subsequently left the region as well. What remains is very high technology manufacturing, such as jet engines, nuclear submarines, pharmaceuticals, robotics, scientific instruments, and medical devices. The Massachusetts Institute of Technology invented the format for university-industry relations in high tech fields and spawned many software and hardware firms, some of which grew rapidly.
By the 21st century, the region had become famous for its leadership roles in the fields of education, medicine, medical research, high technology, finance, and tourism. Some industrial areas were slow in adjusting to the new service economy. In 2000, New England had two of the ten poorest cities in the U.S. (by percentage living below the poverty line): the state capitals of Providence, Rhode Island, and Hartford, Connecticut. They were no longer in the bottom ten by 2010; Connecticut, Massachusetts, and New Hampshire remain among the ten wealthiest states in the United States in terms of median household income and per capita income.
Geography
The states of New England have a combined area, including water surfaces, of 71,988 square miles (186,447 km2), making the region slightly larger than the state of Washington and slightly smaller than Great Britain. Maine alone constitutes nearly one-half of the total area of New England, yet it is only the 39th-largest state, slightly smaller than Indiana. The remaining states are among the smallest in the U.S., including the smallest state, Rhode Island. New England's long rolling hills, mountains, and jagged coastline are glacial landforms resulting from the retreat of ice sheets approximately 18,000 years ago, during the last glacial period. New England is geologically a part of the New England province, an exotic terrane region consisting of the Appalachian Mountains, the New England highlands and the seaboard lowlands. The Appalachian Mountains roughly follow the border between New England and New York. The Berkshires in Massachusetts and Connecticut and the Green Mountains in Vermont, as well as the Taconic Mountains, form a spine of Precambrian rock. The Appalachians extend northwards into New Hampshire as the White Mountains, and then into Maine and Canada. Mount Washington in New Hampshire is the highest peak in the Northeast, although it is not among the ten highest peaks in the eastern United States. It is the site of the second highest recorded wind speed on Earth, and it has the reputation of having the world's most severe weather. The coast of the region, extending from southwestern Connecticut to northeastern Maine, is dotted with lakes, hills, marshes and wetlands, and sandy beaches. Important valleys in the region include the Champlain Valley, the Connecticut River Valley and the Merrimack Valley. The longest river is the Connecticut River, which flows from northeastern New Hampshire for 407 mi (655 km), emptying into Long Island Sound and roughly bisecting the region. Lake Champlain, which forms part of the border between Vermont and New York, is the largest lake in the region, followed by Moosehead Lake in Maine and Lake Winnipesaukee in New Hampshire. The climate of New England varies greatly across its 500-mile (800 km) span from northern Maine to southern Connecticut. Maine, New Hampshire, Vermont, and western Massachusetts have a humid continental climate (Dfb in the Köppen climate classification). In this region the winters are long and cold, and heavy snow is common (most locations receive 60–120 inches (150–300 cm) of snow annually). The summer months are moderately warm, though summer is rather short, and rainfall is spread through the year.
In central and eastern Massachusetts, northern Rhode Island, and northern Connecticut, a humid continental climate also prevails, though of the warm-summer subtype (Dfa): summers are warm to hot, winters are shorter, and there is less snowfall (especially in the coastal areas, where it is often warmer). Southern and coastal Connecticut is the broad transition zone from the cold continental climates of the north to the milder subtropical climates to the south. The frost-free season exceeds 180 days across far southern and coastal Connecticut, coastal Rhode Island, and the islands (Nantucket and Martha's Vineyard). Winters also tend to be much sunnier in southern Connecticut and southern Rhode Island than in the rest of New England. New England contains forested ecosystems with a variety of terrestrial vertebrates. Land-use patterns and land disturbance, such as the dramatic increase in land clearing for agriculture from the mid-eighteenth to the nineteenth century, greatly altered the ecosystem and resulted in extinctions, local extirpations, and recolonizations. According to an analysis of USDA Forest Service data, tree species diversity increases from north to south at about two to three species per degree of latitude. In addition, taller trees are associated with higher tree species diversity, and tree height is a better predictor of diversity than general forest age or biomass. Due to increasing amounts of nitrogen in the soil associated with climate change, the red maple is becoming one of the most abundant trees in the region, outcompeting other maples such as the sugar maple. During the 20th century, urban expansion in regions surrounding New York City became an important economic influence on neighboring Connecticut, parts of which belong to the New York metropolitan area. The U.S. Census Bureau groups Fairfield, New Haven, and Litchfield counties in western Connecticut together with New York City and other parts of New York and New Jersey as a combined statistical area. Demographics In 2020, New England had a population of 15,116,205, a growth of 4.6% from 2010. Massachusetts is the most populous state, with 7,029,917 residents, while Vermont is the least populous, with 643,077 residents. Boston is by far the region's most populous city and metropolitan area. Although a great disparity exists between New England's northern and southern portions, the region's average population density is 234.93 inhabitants/sq mi (90.7/km2). New England has a significantly higher population density than the U.S. as a whole (79.56/sq mi) or even just the contiguous 48 states (94.48/sq mi). Three-quarters of the population of New England, and most of the major cities, are in southern New England, the states of Connecticut, Massachusetts, and Rhode Island, where the combined population density is 786.83/sq mi (2000 census). In northern New England, the states of Maine, New Hampshire, and Vermont, the combined population density is 63.56/sq mi (2000 census). According to the 2006–08 American Community Survey, 48.7% of New Englanders were male and 51.3% were female. Approximately 22.4% of the population were under 18 years of age, and 13.5% were over 65. The six states of New England have the lowest birth rate in the U.S.
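The density figures quoted above can be checked with simple arithmetic. The following is a minimal sketch in Python; the population and density values come from the text, while the land-area figure backed out at the end is an inference from the arithmetic, not a number given in the text.

```python
# A quick arithmetic check of the population-density figures quoted above.
KM2_PER_SQ_MI = 2.58999            # standard conversion: 1 sq mi = 2.58999 km^2

population = 15_116_205            # New England, 2020 Census (from the text)
density_per_sq_mi = 234.93         # quoted average density (from the text)

# Converting the quoted density to metric reproduces the 90.7/km^2 figure.
print(f"{density_per_sq_mi / KM2_PER_SQ_MI:.1f} people per km^2")     # ~90.7

# Dividing the 2020 population by the water-inclusive area (71,988 sq mi)
# gives only about 210/sq mi, so the quoted density evidently refers to
# land area; backing that area out gives roughly 64,000 square miles.
print(f"{population / 71_988:.0f} per sq mi over total area")         # ~210
print(f"{population / density_per_sq_mi:,.0f} sq mi of implied land area")
```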
White Americans make up the majority of New England's population at 73.4% of the total. Hispanic and Latino Americans are New England's largest minority, the second-largest group in the region behind non-Hispanic European Americans. As of 2014, Hispanics and Latinos of any race made up 10.2% of New England's population. Connecticut had the highest proportion at 13.9%, while Vermont had the lowest at 1.3%. There were nearly 1.5 million Hispanic and Latino individuals reported in New England in 2014. Puerto Ricans were the most numerous of the Hispanic and Latino subgroups: over 660,000 Puerto Ricans lived in New England in 2014, forming 4.5% of the population. The Dominican population is over 200,000, and the Mexican and Guatemalan populations are each over 100,000. Americans of Cuban descent are comparatively few; there were roughly 26,000 Cuban Americans in the region in 2014. People of all other Hispanic and Latino ancestries, including Salvadoran, Colombian, and Bolivian, formed 2.5% of New England's population and numbered over 361,000 combined. According to the 2014 American Community Survey, the top ten largest reported European ancestries were the following: Irish: 19.2% (2.8 million); English (includes "American" ancestry): 16.7% (2.4 million); Italian: 13.6% (2.0 million); French and French Canadian: 13.1% (1.9 million); German: 7.4% (1.1 million); Polish: 4.9% (roughly 715,000); Portuguese: 3.2% (467,000); Scottish: 2.5% (370,000); Russian: 1.4% (206,000); and Greek: 1.0% (152,000). English is by far the most common language spoken at home: approximately 81.3% of all residents over the age of five (11.3 million people) spoke only English at home. Roughly 1,085,000 people (7.8% of the population) spoke Spanish at home, and roughly 970,000 people (7.0%) spoke other Indo-European languages at home. Over 403,000 people (2.9%) spoke an Asian or Pacific Island language at home. About 1% spoke French at home, although this figure is above 20% in parts of northern New England, which borders francophone Québec.[citation needed] Roughly 99,000 people (0.7% of the population) spoke languages other than these at home. As of 2014, approximately 87% of New England's inhabitants were born in the U.S., while over 12% were foreign-born. Of foreign-born residents, 35.8% were born in Latin America, 28.6% in Asia, 22.9% in Europe, and 8.5% in Africa. Southern New England forms an integral part of the BosWash megalopolis, a conglomeration of urban centers spanning from Boston to Washington, D.C. The region includes three of the four most densely populated states in the U.S.; only New Jersey has a higher population density than Rhode Island, Massachusetts, and Connecticut. Greater Boston, which includes parts of southern New Hampshire, has a total population of approximately 4.8 million, while over half the population of New England falls inside Boston's Combined Statistical Area of over 8.2 million. Economy Several factors combine to make the New England economy unique. The region is distant from the geographic center of the country, and it is relatively small but densely populated. It historically has been an important center of industry and manufacturing and a supplier of natural resource products, such as granite, lobster, and codfish.
The service industry is important, including tourism, education, financial and insurance services, and architectural, building, and construction services. The U.S. Department of Commerce has called the New England economy a microcosm of the entire U.S. economy. The region underwent a long period of deindustrialization in the first half of the 20th century, as traditional manufacturing companies relocated to the Midwest and textile and furniture manufacturing migrated to the South. In the late 20th century, an increasing portion of the regional economy included high technology, the military defense industry, finance and insurance services, and education and health services. As of 2018, the GDP of New England was $1.1 trillion. New England exports food products ranging from fish to lobster, cranberries, potatoes, and maple syrup. About half of the region's exports consist of industrial and commercial machinery, such as computers and electronic and electrical equipment. Granite is quarried at Barre, Vermont; guns are made at Springfield, Massachusetts, Exeter, New Hampshire, and Saco, Maine; submarines are built at Groton, Connecticut; surface naval vessels at Bath, Maine; and hand tools at Turners Falls, Massachusetts. In 2017, Boston was ranked the ninth-most competitive financial center in the world and the fourth-most competitive in the United States. Boston-based Fidelity Investments helped popularize the mutual fund in the 1980s and has made Boston one of the top financial centers in the United States. The city is home to the headquarters of Santander Bank and is a center for venture capital firms. State Street Corporation, which specializes in asset management and custody services, is also based in the city. Boston is also a printing and publishing center. Houghton Mifflin Harcourt is headquartered there, along with Bedford-St. Martin's and Beacon Press. The city is also home to the Hynes Convention Center in the Back Bay, as well as the Seaport Hotel, the Seaport World Trade Center, and the Boston Convention and Exhibition Center on the South Boston waterfront. The General Electric Corporation announced its decision to move the company's global headquarters to the Boston Seaport District from Fairfield, Connecticut, in 2016, citing factors including Boston's preeminence in the realm of higher education. The city also holds the headquarters of several major athletic and footwear companies, including Converse, New Balance, and Reebok. Rockport, Puma, and Wolverine World Wide have headquarters or regional offices just outside the city. Hartford is the historic international center of the insurance industry, with companies such as Aetna, Conning & Company, The Hartford, Harvard Pilgrim Health Care, The Phoenix Companies, and Hartford Steam Boiler based in the city; The Travelers Companies and Lincoln National Corporation also have major operations there. It is also home to the corporate headquarters of U.S. Fire Arms Mfg. Co., United Technologies, and Virtus Investment Partners. Fairfield County, Connecticut, has a large concentration of investment management firms, most notably Bridgewater Associates (one of the world's largest hedge fund companies), Aladdin Capital Management, and Point72 Asset Management. Moreover, many international banks have their North American headquarters in Fairfield County, such as NatWest Group and UBS. Agriculture is limited by the area's rocky soil, cool climate, and small area. Some New England states, however, are ranked highly among U.S.
states for particular areas of production. Maine is ranked ninth for aquaculture and has abundant potato fields in its northeastern part. Vermont is fifteenth for dairy products, and Connecticut and Massachusetts are seventh and eleventh for tobacco, respectively. Cranberries are grown in Massachusetts' Cape Cod-Southcoast-South Shore area, and blueberries in Maine. The region is mostly energy-efficient compared to the U.S. at large, with every state but Maine ranking within the ten most energy-efficient states; every state in New England also ranks within the ten most expensive states for electricity prices. Wind power, mainly from offshore sources, is expected to gain market share in the 2020s. In 2023, three of the six New England states were among the top ten states in the country in terms of taxes paid per taxpayer, while New Hampshire was among the five lowest. The rankings were: #3 Maine (11.14%), #4 Vermont (10.28%), #5 Connecticut (9.83%), #11 Rhode Island (9.07%), #20 Massachusetts (8.48%), and #48 New Hampshire (6.14%). While overall tax burden varies widely, all six states have exceptionally high property taxes, with five of the six within the nationwide top ten: #1 Maine (5.33%), #2 Vermont (4.98%), #3 New Hampshire (4.94%), #6 Connecticut (4.24%), #7 Rhode Island (4.17%), and #13 Massachusetts (3.42%). Government New England town meetings were derived from meetings held by church elders and are still an integral part of government in many New England towns. At such meetings, any citizen of the town may discuss issues with other members of the community and vote on them. This is the strongest example of direct democracy in the U.S. today, and the strong democratic tradition was already apparent in the early 19th century, when Alexis de Tocqueville wrote in Democracy in America: "New England, where education and liberty are the daughters of morality and religion, where society has acquired age and stability enough to enable it to form principles and hold fixed habits, the common people are accustomed to respect intellectual and moral superiority and to submit to it without complaint, although they set at naught all those privileges which wealth and birth have introduced among mankind. In New England, consequently, the democracy makes a more judicious choice than it does elsewhere." By contrast, James Madison wrote in Federalist No. 55 that, regardless of the assembly, "passion never fails to wrest the scepter from reason. Had every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob." The use and effectiveness of town meetings is still discussed by scholars, as well as the possible application of the format to other regions and countries. State and national elected officials in New England have recently come mainly from the Democratic Party. The region is generally considered to be the most liberal in the United States, with more New Englanders identifying as liberals than Americans elsewhere. In 2010, four of the six New England states were polled as among the most liberal in the United States. As of 2021, five of the six states of New England have voted for every Democratic presidential nominee since 1992. In that time, New Hampshire has voted for Democratic nominees in every presidential election except 2000, when George W. Bush narrowly won the state.
Democratic nominee Joe Biden performed particularly strongly in New England in 2020, winning 61.2% of the total vote across the six states, the highest share for a Democrat since the landslide election of 1964. As of the 117th Congress, all members of the U.S. House of Representatives from New England are members of the Democratic Party, and all but one of its senators caucus with the Democrats. Two of those senators, although they caucus with the Democrats, are the only independents currently serving in Congress: Bernie Sanders, a self-described democratic socialist representing Vermont, and Angus King of Maine. In the 2008 presidential election, Barack Obama carried all six New England states by 9 percentage points or more. He carried every county in New England except for Piscataquis County, Maine, which he lost by 4% to Senator John McCain (R-AZ). Pursuant to the reapportionment following the 2010 census, New England collectively has 33 electoral votes. Judging purely by party registration rather than voting patterns, New England today is one of the most Democratic regions in the U.S. According to Gallup, Connecticut, Massachusetts, Rhode Island, and Vermont are "solidly Democratic", Maine "leans Democratic", and New Hampshire is a swing state. Though New England is today considered a Democratic Party stronghold, much of the region was staunchly Republican before the mid-twentieth century. This changed in the late 20th century, in large part due to demographic shifts and the Republican Party's adoption of socially conservative platforms as part of its strategic shift towards the South. For example, Vermont voted Republican in every presidential election from 1856 through 1988 with the exception of 1964, and has voted Democratic in every election since. Maine and Vermont were the only two states in the nation to vote against Democrat Franklin D. Roosevelt all four times he ran for president. Republicans in New England are today considered by both liberals and conservatives to be more moderate (socially liberal) than Republicans in other parts of the U.S. Historically, the New Hampshire primary has been the first in a series of nationwide political party primary elections held in the United States every four years, usually marking the beginning of the U.S. presidential election process. Even though few delegates are chosen from New Hampshire, the primary has always been pivotal to both New England and American politics. One college in particular, Saint Anselm College, has been home to numerous national presidential debates and candidate visits. Education New England contains some of the oldest and most renowned institutions of higher learning in the United States and the world. Harvard College was the first such institution, founded in 1636 at Cambridge, Massachusetts, to train preachers. Yale University was founded in Old Saybrook, Connecticut, in 1701 and moved to New Haven, Connecticut, in 1718, where it has remained to the present day; it awarded the nation's first doctoral (PhD) degree in 1861.
Brown University was the first college in the nation to accept students of all religious affiliations, and is the seventh-oldest U.S. institution of higher learning. It was founded in Providence, Rhode Island, in 1764. Dartmouth College was founded five years later in Hanover, New Hampshire, with the mission of educating the local American Indian population as well as English youth. The University of Vermont, the fifth-oldest university in New England, was founded in 1791, the same year that Vermont joined the Union. In addition to four of the eight Ivy League schools, New England contains the Massachusetts Institute of Technology (MIT), most of the institutions identified as the "Little Ivies", four of the original Seven Sisters, one of the eight original Public Ivies, the Colleges of Worcester Consortium in central Massachusetts, and the Five Colleges consortium in western Massachusetts. The University of Maine, the University of New Hampshire, the University of Connecticut, the University of Massachusetts at Amherst, the University of Rhode Island, and the University of Vermont are the flagship state universities in the region. At the pre-college level, New England is home to many of the nation's most prestigious private schools. The concept of the elite "New England prep school" (preparatory school) and the "preppy" lifestyle is an iconic part of the region's image. New England is home to some of the oldest public schools in the nation and was the first region in the United States to implement universal compulsory schooling. Boston Latin School is the oldest public school in America and was attended by several signatories of the Declaration of Independence. Hartford Public High School is the second-oldest operating high school in the U.S. As of 2005, the National Education Association ranked Connecticut as having the highest-paid teachers in the country; Massachusetts and Rhode Island ranked eighth and ninth, respectively. New Hampshire, Rhode Island, and Vermont have cooperated in developing the New England Common Assessment Program test under No Child Left Behind guidelines, allowing these states to compare the resulting scores with one another. Besides a vigorous newspaper press, there are numerous academic journals and publishing companies in the region, including The New England Journal of Medicine, Harvard University Press, and Yale University Press. Some of its institutions lead the open-access alternative to conventional academic publishing, including MIT, the University of Connecticut, and the University of Maine. The Federal Reserve Bank of Boston publishes the New England Economic Review. Popular culture New England has a shared heritage with England and a culture primarily shaped by waves of immigration. In contrast to other American regions, most of New England's earliest Puritan settlers came from eastern England, contributing to the region's distinctive accents, foods, customs, and social structures. Within modern New England a cultural divide exists between urban New Englanders living along the densely populated coastline and rural New Englanders in western Massachusetts, northwestern and northeastern Connecticut, Vermont, New Hampshire, and Maine, where population density is low. There is also a substantial divide between Connecticut and the other states of the region, owing to the former's close cultural and economic ties with the New York metropolitan area. Today, New England is the least religious region of the U.S.
In 2009, less than half of those polled in Maine, Massachusetts, New Hampshire, and Vermont claimed that religion was an important part of their daily lives. Connecticut and Rhode Island are among the ten least religious states, where 55% and 53% of those polled, respectively, claimed that it was important. According to the American Religious Identification Survey, 34% of Vermonters reported having no religion; nearly one out of every four New Englanders identifies as having no religion, more than in any other part of the U.S. New England has had one of the highest percentages of Catholics in the U.S., though this share declined from 50% in 1990 to 36% in 2008. Many of the first European colonists of New England had a maritime orientation toward whaling (first noted about 1650) and fishing, in addition to farming. New England has developed a distinct cuisine, dialect, architecture, and government. New England cuisine has a reputation for its emphasis on seafood and dairy; clam chowder, lobster, and other products of the sea are among some of the region's most popular foods. New England has largely preserved its regional character, especially in its historic places. The region has become more ethnically diverse, having seen waves of immigration from Ireland, Quebec, Italy, Portugal, Germany, Poland, Scandinavia, Asia, Latin America, Africa, other parts of the U.S., and elsewhere. The enduring European influence can be seen in the region in the use of traffic rotaries; the bilingual French and English towns of northern Vermont, Maine, and New Hampshire; the unique, often non-rhotic traditional coastal dialect akin to that of the southeastern half of England; and the region's heavy prevalence of English town and county names. These repeat from state to state, primarily because settlers throughout the region named their new towns after their old ones. For example, the town of North Yarmouth, Maine, was named by settlers from Yarmouth, Massachusetts, which was in turn named for Great Yarmouth (still locally called Yarmouth) in England. Every New England state has a town named Warren (a French-English noble family of wealthy settlers), and each except Rhode Island has a city or town named Franklin and Washington (constitutional founding fathers), as well as Andover, Bridgewater, Chester, Manchester, Plymouth, and Windsor (these six were towns in England). Every state except Connecticut has both a Lincoln and a Richmond. Massachusetts, Vermont, and Maine each contain a Franklin County. New England maintains a distinct cuisine and food culture. Early foods in the region were influenced by Native American and English cuisines, and the early colonists often adapted their original cuisine to fit with the available foods of the region. New England staples reflect the convergence of American Indian and Pilgrim cuisine, such as johnnycakes, succotash, cornbread, and various seafood recipes. The Wabanaki tribal nations made nut milk. New England also has a distinct food vocabulary. A few of the unique regional terms include "grinders" for submarine sandwiches and "frappes" for thick milkshakes, referred to as "cabinets" in Rhode Island. Other foods native to the region include steak tips (marinated sirloin steak), bulkie rolls, maple syrup, cranberry recipes, and clam chowder. A type of India pale ale known as New England India Pale Ale (NEIPA) was developed in Vermont in the 2010s.
Other regional beverages include Moxie, one of the first mass-produced soft drinks in the United States, introduced in Lowell, Massachusetts, in 1876; it remains popular in New England, particularly in Maine. Coffee milk, the official state drink of Rhode Island, is closely associated with that state. Portuguese cuisine is an important element in the annual Feast of the Blessed Sacrament in New Bedford, Massachusetts, the largest ethnic heritage festival in New England. There are several characteristics of spoken American English in the region, most famously the Boston accent, which is native to the northeastern coastal regions of New England. The most identifiable features of the Boston accent originated from England's Received Pronunciation, which shares features such as the broad A and dropping the final R. Another source was 17th-century speech in East Anglia and Lincolnshire, where many of the Puritan immigrants had originated. The East Anglian "whine" developed into the Yankee "twang". Boston accents were at one point most strongly associated with the so-called "Eastern Establishment" and Boston's upper class, although today the accent is predominantly associated with blue-collar natives, as exemplified by movies such as Good Will Hunting and The Departed. The Boston accent and those accents closely related to it cover eastern Massachusetts, New Hampshire, and Maine. Some Rhode Islanders speak with a non-rhotic accent that many compare to a "Brooklyn" accent or a cross between a New York and Boston accent, where "water" becomes "wata". Many Rhode Islanders distinguish the aw sound [ɔː], as one might hear in New Jersey; e.g., the word "coffee" is pronounced /ˈkɔːfi/ KAW-fee. This type of accent was brought to the region by early settlers from eastern England in the Puritan migration of the mid-seventeenth century. Acadian and Québécois influences remain present in the music and dance of much of rural New England, particularly Maine. Contra dancing and country square dancing are popular throughout New England, usually backed by live Irish, Acadian, or other folk music. Fife and drum corps are common, especially in southern New England and more specifically Connecticut, with music of mostly Celtic, English, and local origin. New England leads the U.S. in ice cream consumption per capita. Candlepin bowling is essentially confined to New England, where it was invented in the 19th century. New England was an important center of American classical music for some time. The First New England School of composers was active between 1770 and 1820, and the Second New England School about a century later. Prominent modernist composers also come from the region, including Charles Ives and John Adams. Boston is the site of the New England Conservatory, the Boston Conservatory at Berklee, and the Boston Symphony Orchestra. In popular music, the region has produced Donna Summer, JoJo, New Edition, Bobby Brown, Bell Biv DeVoe, Passion Pit, MGMT, Meghan Trainor, New Kids on the Block, Rachel Platten, Clairo, Noah Kahan, Amy Allen, and John Mayer. In rock music, the region has produced Rob Zombie, Aerosmith, Extreme, the Modern Lovers, Phish, the Pixies, the Cars, the J. Geils Band, the Mighty Mighty Bosstones, Grace Potter, GG Allin, the Dresden Dolls, Dinosaur Jr., the Dropkick Murphys, and Boston. Quincy, Massachusetts, native Dick Dale helped popularize surf rock. Hip hop acts hailing from New England include Gang Starr, Apathy, Mr. Lif, and Akrobatik. The leading U.S.
cable TV sports broadcaster ESPN is headquartered in Bristol, Connecticut. New England has several regional cable networks, including New England Cable News (NECN) and the New England Sports Network (NESN). New England Cable News is the largest regional 24-hour cable news network in the U.S., broadcasting to more than 3.2 million homes in all of the New England states. Its studios are located in Newton, Massachusetts, outside of Boston, and it maintains bureaus in Manchester, New Hampshire; Hartford, Connecticut; Worcester, Massachusetts; Portland, Maine; and Burlington, Vermont. In Connecticut's Litchfield, Fairfield, and New Haven counties, it also broadcasts New York-based news programs, owing in part to the immense influence New York has on the region's economy and culture, and to give Connecticut broadcasters the ability to compete with overlapping media coverage from New York-area broadcasters. NESN broadcasts Boston Red Sox baseball and Boston Bruins hockey throughout the region, save for Fairfield County, Connecticut. Connecticut also receives the YES Network, which broadcasts the games of the New York Yankees and Brooklyn Nets, as well as SportsNet New York (SNY), which broadcasts New York Mets games. NBC Sports Boston broadcasts the games of the Boston Celtics, New England Revolution, and Boston Cannons to all of New England except Fairfield County. While most New England cities have daily newspapers, The Boston Globe and The New York Times are distributed widely throughout the region. Major newspapers also include The Providence Journal, the Portland Press Herald, and the Hartford Courant, the oldest continuously published newspaper in the U.S. New Englanders are well represented in American comedy. Writers for The Simpsons and late-night television programs often come by way of The Harvard Lampoon. A number of Saturday Night Live (SNL) cast members have roots in New England, from Adam Sandler to Amy Poehler, who also starred in the NBC television series Parks and Recreation. Seth MacFarlane, the creator of Family Guy, is from Connecticut; the show takes place in the fictional town of Quahog, Rhode Island. Former Daily Show correspondents John Hodgman, Rob Corddry, and Steve Carell are from Massachusetts. Carell was also involved in film and the American adaptation of The Office (alongside fellow Massachusetts natives Mindy Kaling, B. J. Novak, and John Krasinski), which features Dunder-Mifflin branches set in Stamford, Connecticut, and Nashua, New Hampshire. Late-night television hosts Jay Leno and Conan O'Brien have roots in the Boston area. Notable stand-up comedians are also from the region, including Bill Burr, Steve Sweeney, Steven Wright, Sarah Silverman, Lisa Lampanelli, Denis Leary, Lenny Clarke, Patrice O'Neal, and Louis C.K. SNL cast member Seth Meyers once attributed the region's imprint on American humor to its "sort of wry New England sense of pointing out anyone who's trying to make a big deal of himself", with The Boston Globe suggesting that irony and sarcasm are its trademarks, as well as Irish influences. New Englanders have made significant contributions to literature. The first printing press in America was set up in Cambridge, Massachusetts, by Stephen Daye in the 17th century. Writers in New England produced many works on religious subjects, particularly on Puritan theology and poetry during colonial times and on Enlightenment ideas during the American Revolution.
The literature of New England has had an enduring influence on American literature in general, with themes that are emblematic of the larger concerns of American letters, such as religion, race, the individual versus society, social repression, and nature. 19th-century New England was a center for progressive ideals, and many abolitionist and transcendentalist tracts were produced there. Leading transcendentalists were from New England, such as Henry David Thoreau, Ralph Waldo Emerson, and Frederic Henry Hedge. Hartford, Connecticut, resident Harriet Beecher Stowe's novel Uncle Tom's Cabin was influential in the spread of abolitionist ideas and is said to have "laid the groundwork for the Civil War". Other prominent New England novelists include John Irving, Edgar Allan Poe, Louisa May Alcott, Sarah Orne Jewett, H. P. Lovecraft, Annie Proulx, Stephen King, Jack Kerouac, George V. Higgins, and Nathaniel Hawthorne. Boston was the center of the American publishing industry for some years, largely on the strength of its local writers, before it was overtaken by New York in the middle of the nineteenth century. Boston remains the home of publishers Houghton Mifflin and Pearson Education, and it was the longtime home of the literary magazine The Atlantic Monthly. Merriam-Webster is based in Springfield, Massachusetts. Yankee is a magazine for New Englanders based in Dublin, New Hampshire. Many New England poets have also been preeminent in American poetry. Prominent poets include Henry Wadsworth Longfellow, Edwin Arlington Robinson, Amy Lowell, Emily Dickinson, Elizabeth Bishop, Stanley Kunitz, E. E. Cummings, Edna St. Vincent Millay, Robert P. T. Coffin, and Richard Wilbur. Robert Frost, who was described as an "artistic institution", frequently wrote about rural New England life. The Confessional poetry movement features prominent New England writers including Robert Lowell, Anne Sexton, and Sylvia Plath. New England has a rich history in filmmaking dating back to the dawn of the motion picture era at the turn of the 20th century, and it has sometimes been dubbed Hollywood East by film critics. A theater at 547 Washington Street in Boston was the second location to debut a picture projected by the Vitascope, and shortly thereafter several novels were being adapted for the screen and set in New England, including The Scarlet Letter and The House of the Seven Gables. The New England region continued to produce films at a pace above the national average for the duration of the 20th century, including blockbuster hits such as Jaws, Good Will Hunting, and The Departed, all of which won Academy Awards. The New England area became known for a number of themes that recurred in films made during this era, including the development of Yankee characters, small-town life contrasted with city values, seafaring tales, family secrets, and haunted New England. These themes are rooted in centuries of New England culture and are complemented by the region's diverse natural landscape and architecture, from the Atlantic Ocean and brilliant fall foliage to church steeples and skyscrapers. Since the turn of the millennium, Boston and the greater New England region have been home to the production of numerous films and television series, thanks in part to tax incentive programs put in place by local governments to attract filmmakers to the region.
Notable actors and actresses who have come from the New England area include Ben Affleck, Matt Damon, Chris Evans, Ryan O'Neal, Amy Poehler, Elizabeth Banks, Steve Carell, Ruth Gordon, John Krasinski, Mark Wahlberg, and Matthew Perry. Many films and television series have been produced in and set in New England. There are many museums located throughout New England, especially in the Greater Boston area. These museums include privately held collections as well as public institutions. Most notable of these are the Museum of Fine Arts; the Institute of Contemporary Art, Boston; the Isabella Stewart Gardner Museum; the Worcester Art Museum; and the Peabody Essex Museum. The oldest public museum in continuous operation in the United States is the Pilgrim Hall Museum in Plymouth, Massachusetts, which opened in 1824. The Boston Public Library is the largest public library in the region, with over 8 million items in its collection. The largest academic research library in the world is the Harvard Library in Cambridge, Massachusetts. The W. E. B. Du Bois Library of the University of Massachusetts Amherst is the tallest academic library in the world. There are also many historical societies in the region. Historic New England operates museums and historic sites in the name of historical preservation; many of its properties are preserved house museums of prominent figures in New England and American history. Other societies include the Massachusetts Historical Society, the Essex Institute, the American Antiquarian Society, and The Bostonian Society. The Massachusetts Historical Society, founded in 1791, is the oldest such society operating in the United States. Many cities and towns across New England operate their own historical societies focused on the preservation of local sites and the recording of local history. New England has a strong heritage of athletics, and many internationally popular sports were invented and codified in the region, including basketball, volleyball, and American football. Football is the most popular sport in the region and was developed by Walter Camp in New Haven, Connecticut, in the 1870s and 1880s. The New England Patriots are based in Foxborough, Massachusetts, and are the most popular professional sports team in New England. The Patriots have won six Super Bowl championships and are one of the most successful franchises in the National Football League. There are also high-profile collegiate and high school football rivalries in New England. These games are most often played on Thanksgiving Day and are some of the oldest sports rivalries in the United States. The high school rivalry between Wellesley High School and Needham High School in Massachusetts is considered to be the nation's oldest football rivalry, having started in 1882. Before the advent of the modern rules of baseball, a different form, called the Massachusetts Game, was played. This version of baseball was an early rival of the Knickerbocker Rules of New York and was played throughout New England. In 1869, there were 59 teams throughout the region that played according to the Massachusetts rules. The New York rules gradually became more popular throughout the United States, and professional and semi-professional clubs began to appear. Early teams included the Providence Grays, the Worcester Worcesters, and the Hartford Dark Blues; these did not last long, but other teams grew to renown, such as the Boston Braves and the Boston Red Sox.
Fenway Park was built in 1912 and is the oldest ballpark still in use in Major League Baseball. The Red Sox have won the World Series nine times, tied for third-most among all MLB teams. Other professional baseball teams in the region include the Hartford Yard Goats, New Hampshire Fisher Cats, Vermont Lake Monsters, Portland Sea Dogs, Bridgeport Bluefish, New Britain Bees, and the Worcester Red Sox. Basketball was developed in Springfield, Massachusetts, by James Naismith in 1891. Naismith was attempting to create a game that could be played indoors so that athletes could keep fit during New England winters. The Boston Celtics were founded in 1946 and are the most successful NBA team, having won 18 titles. The Celtics' NBA G League team, the Maine Celtics, is based in Portland, Maine. The Women's National Basketball Association's Connecticut Sun is based in Uncasville, Connecticut. The UConn Huskies women's basketball team is the most successful women's collegiate team in the nation, winning 11 NCAA Division I titles, and the UConn Huskies men's basketball team has won six titles, tied for third-most in the nation. The Basketball Hall of Fame is located in Springfield, Massachusetts. Winter sports, including alpine skiing, snowboarding, and Nordic skiing, are extremely popular and have a long history in the region. Ice hockey is also a popular sport. The Boston Bruins were founded in 1924 as an Original Six team, and they have a historic rivalry with the Montreal Canadiens. The Bruins play in the TD Garden, a venue that they share with the Boston Celtics. The Boston Fleet of the Professional Women's Hockey League (PWHL) plays at the Tsongas Center. College hockey is also a popular spectator sport; Boston's annual Beanpot tournament pits Northeastern University, Boston University, Harvard University, and Boston College against one another. Other hockey teams include the Maine Mariners, Providence Bruins, Springfield Thunderbirds, Worcester Railers, Bridgeport Sound Tigers, and the Hartford Wolf Pack. The region's largest ice hockey and skating facility is the New England Sports Center in Marlborough, Massachusetts, home to the Skating Club of Boston, one of the oldest ice skating clubs in the United States. Volleyball was invented in Holyoke, Massachusetts, in 1895 by William G. Morgan. Morgan was an instructor at a YMCA and wanted to create an indoor game for his athletes. The game was based on badminton and spread as a sport through YMCA facilities. The International Volleyball Hall of Fame is located in Holyoke. Rowing, sailing, and yacht racing are also popular events in New England. The Head of the Charles Regatta is held on the Charles River each October and attracts over 10,000 athletes and over 200,000 spectators. Sailing regattas include the Newport Bermuda Race, the Marblehead to Halifax Ocean Race, and the Single-Handed Trans-Atlantic Race. The New York Times considers the Newport and Marblehead races to be among the most prestigious in the world. The Boston Marathon, first run in 1897, is held every year on Patriots' Day. It is a World Marathon Major and is operated by the Boston Athletic Association. The race route goes from Hopkinton, Massachusetts, through Greater Boston, finishing at Copley Square in Boston. The race offers far less prize money than many other marathons, but its difficulty and long history make it one of the world's most prestigious marathons. It is New England's largest sporting event, with nearly 500,000 spectators each year.
New England is represented in the top level of American professional soccer by the New England Revolution, an inaugural team of Major League Soccer, founded in 1994, which plays at Gillette Stadium, shared with the New England Patriots. The Revolution have won a U.S. Open Cup and a SuperLiga championship, and they have appeared in five MLS Cup finals. In the USL Championship, the second division of the American soccer pyramid, New England is represented by Hartford Athletic, which was founded in 2019 and plays its games at Dillon Stadium, and Rhode Island FC, which began play in 2024. The International Tennis Hall of Fame is in Newport, Rhode Island. Transportation Each of the New England states has its own department of transportation, which plans and develops systems for transport, though some transportation authorities operate across state and municipal lines. The Massachusetts Bay Transportation Authority (MBTA) oversees public transportation in the Greater Boston area. It is the largest such agency and operates throughout eastern Massachusetts and into Rhode Island. The MBTA oversees the oldest subway system in the United States (the Tremont Street subway) and the second most-used light rail line (the Green Line), as well as one of five remaining trolleybus systems nationwide. Coastal Connecticut is served by New York's Metropolitan Transportation Authority (MTA), reflecting that region's connection to New York's economy. The MTA operates the Metro-North Railroad in coordination with the Connecticut Department of Transportation. CTrail, a division of the Connecticut Department of Transportation, operates the Shore Line East along the state's southern coast, terminating at Old Saybrook and New London, as well as the Hartford Line, which runs south to New Haven and north to Springfield. Commuter rail service is provided north of Springfield to Greenfield, Massachusetts, as part of the Valley Flyer Amtrak route. Amtrak provides interstate rail service throughout New England. Boston is the northern terminus of the Northeast Corridor. The Vermonter connects Vermont to Massachusetts and Connecticut, while the Downeaster links Maine to Boston. The long-distance Lake Shore Limited train has two eastern termini after splitting in Albany, one of which is Boston. This provides rail service on the former Boston and Albany Railroad, which runs between its namesake cities; the rest of the Lake Shore Limited continues to New York City. Bus transportation is available in most urban areas and is governed by regional and local authorities. The Pioneer Valley Transit Authority and the MetroWest Regional Transit Authority are examples of public bus transportation serving more suburban and rural communities. South Station in Boston is a major center for bus, rail, and light rail lines. Major interstate highways traversing the region include I-95, I-93, I-91, I-89, I-84, and I-90 (the Massachusetts Turnpike). Logan Airport, opened in 1923 and located in East Boston and Winthrop, Massachusetts, is the busiest transportation hub in the region in terms of number of passengers and total cargo. It is a hub for Cape Air and Delta Air Lines, a focus city for JetBlue, and the 16th-busiest airport in the United States. Other airports in the region include Patrick Leahy Burlington International Airport, Bradley International Airport, Rhode Island T. F. Green International Airport, Manchester–Boston Regional Airport, and Portland International Jetport.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Symmetry_(biology)#Bilateral_symmetry] | [TOKENS: 3463]
Contents Symmetry in biology Symmetry in biology refers to the symmetry observed in organisms, including plants, animals, fungi, and bacteria. External symmetry can be easily seen by just looking at an organism. For example, the face of a human being has a plane of symmetry down its centre, and a pine cone displays a clear symmetrical spiral pattern. Internal features can also show symmetry, for example the tubes in the human body (responsible for transporting gases, nutrients, and waste products), which are cylindrical and have several planes of symmetry. Biological symmetry can be thought of as a balanced distribution of duplicate body parts or shapes within the body of an organism. Importantly, unlike in mathematics, symmetry in biology is always approximate. For example, plant leaves – while considered symmetrical – rarely match up exactly when folded in half. Symmetry is one class of patterns in nature whereby there is near-repetition of the pattern element, either by reflection or rotation. While sponges and placozoans represent two groups of animals which do not show any symmetry (i.e. are asymmetrical), the body plans of most multicellular organisms exhibit, and are defined by, some form of symmetry. There are only a few types of symmetry which are possible in body plans. These include radial (cylindrical), bilateral, biradial, and spherical symmetry. Additionally, a yet unclassified and poorly understood group of Ediacaran organisms known as the Rangeomorphs exhibits fractal symmetry. While the classification of viruses as an "organism" remains controversial, many viruses also display icosahedral symmetry. The importance of symmetry is illustrated by the fact that groups of animals have traditionally been defined by this feature in taxonomic groupings. The Radiata, animals with radial symmetry, formed one of the four branches of Georges Cuvier's classification of the animal kingdom. Meanwhile, Bilateria is a taxonomic grouping still used today to represent organisms with embryonic bilateral symmetry. Radial symmetry Organisms with radial symmetry show a repeating pattern around a central axis such that they can be separated into several identical pieces when cut through the central point, much like pieces of a pie. Typically, this involves repeating a body part 4, 5, 6 or 8 times around the axis – referred to as tetramerism, pentamerism, hexamerism and octamerism, respectively. Such organisms exhibit no left or right sides but do have a top and a bottom surface, or a front and a back. Georges Cuvier classified animals with radial symmetry in the taxon Radiata (Zoophytes), which is now generally accepted to be an assemblage of different animal phyla that do not share a single common ancestor (a polyphyletic group). Most radially symmetric animals are symmetrical about an axis extending from the center of the oral surface, which contains the mouth, to the center of the opposite (aboral) end. Animals in the phyla Cnidaria and Echinodermata generally show radial symmetry, although many sea anemones and some corals within the Cnidaria have bilateral symmetry defined by a single structure, the siphonoglyph. Radial symmetry is especially suitable for sessile animals such as the sea anemone, floating animals such as jellyfish, and slow-moving organisms such as starfish, whereas bilateral symmetry favours locomotion by generating a streamlined body. Many flowers are also radially symmetric, or "actinomorphic".
Roughly identical floral structures – petals, sepals, and stamens – occur at regular intervals around the axis of the flower, which is often the female reproductive organ containing the carpel, style and stigma. Three-fold triradial symmetry was present in Trilobozoa from the Late Ediacaran period. Four-fold tetramerism appears in some jellyfish, such as Aurelia marginalis. This is immediately obvious when looking at the jellyfish due to the presence of four gonads, visible through its translucent body. This radial symmetry is ecologically important in allowing the jellyfish to detect and respond to stimuli (mainly food and danger) from all directions. Flowering plants show five-fold pentamerism in many of their flowers and fruits. This is easily seen through the arrangement of five carpels (seed pockets) in an apple when cut transversely. Among animals, only the echinoderms such as sea stars, sea urchins, and sea lilies are pentamerous as adults, with five arms arranged around the mouth. Being bilaterian animals, however, they initially develop with mirror symmetry as larvae, then gain pentaradial symmetry later. Hexamerism is found in the corals and sea anemones (class Anthozoa), which are divided into two groups based on their symmetry. The most common corals in the subclass Hexacorallia have a hexameric body plan; their polyps have six-fold internal symmetry and a number of tentacles that is a multiple of six. Octamerism is found in corals of the subclass Octocorallia. These have polyps with eight tentacles and octameric radial symmetry. The octopus, however, has bilateral symmetry, despite its eight arms. Icosahedral symmetry Icosahedral symmetry occurs in an organism which contains 60 subunits generated by 20 faces, each an equilateral triangle, and 12 corners. Within the icosahedron there is 2-fold, 3-fold and 5-fold symmetry. Many viruses, including canine parvovirus, show this form of symmetry due to the presence of an icosahedral viral shell. Such symmetry has evolved because it allows the viral particle to be built up of repetitive subunits consisting of a limited number of structural proteins (encoded by viral genes), thereby saving space in the viral genome. Icosahedral symmetry can still be maintained with more than 60 subunits, but only in multiples of 60. For example, the T=3 Tomato bushy stunt virus has 60 × 3 protein subunits (180 copies of the same structural protein). Although these viruses are often referred to as 'spherical', they do not show true mathematical spherical symmetry. In the early 20th century, Ernst Haeckel described (Haeckel, 1904) a number of species of Radiolaria, some of whose skeletons are shaped like various regular polyhedra. Examples include Circoporus octahedrus, Circogonia icosahedra, Lithocubus geometricus and Circorrhegma dodecahedra. The shapes of these creatures should be obvious from their names. Tetrahedral symmetry is present in Callimitra agnesae. Spherical symmetry Spherical symmetry is characterised by the ability to draw an endless, or great but finite, number of symmetry axes through the body. This means that spherical symmetry occurs in an organism if it is able to be cut into two identical halves through any cut that runs through the organism's center. True spherical symmetry is not found in animal body plans. Organisms which show approximate spherical symmetry include the freshwater green alga Volvox. Bacteria are often referred to as having a 'spherical' shape.
Bacteria are categorized based on their shapes into three classes: cocci (spherical), bacilli (rod-shaped) and spirochetes (spiral-shaped). In reality, this is a severe over-simplification, as bacterial cells can be curved, bent, flattened, oblong spheroids, and many other shapes. Due to the huge number of bacteria considered to be cocci (coccus if a single cell), it is unlikely that all of these show true spherical symmetry. It is important to distinguish between the casual use of the word 'spherical' to loosely describe organisms and the strict meaning of spherical symmetry. The same situation is seen in the description of viruses – 'spherical' viruses do not necessarily show spherical symmetry, being usually icosahedral. Bilateral symmetry Organisms with bilateral symmetry contain a single plane of symmetry, the sagittal plane, which divides the organism into two roughly mirror-image left and right halves – approximate reflectional symmetry. Animals with bilateral symmetry are classified into a large group called the Bilateria, which contains 99% of all animals (comprising over 32 phyla and 1 million described species). All bilaterians have some asymmetrical features; for example, the human heart and liver are positioned asymmetrically despite the body having external bilateral symmetry. The bilateral symmetry of bilaterians is a complex trait which develops due to the expression of many genes. The Bilateria have two axes of polarity. The first is an anterior–posterior (AP) axis which can be visualised as an imaginary axis running from the head or mouth to the tail or other end of an organism. The second is the dorsal–ventral (DV) axis, which runs perpendicular to the AP axis. During development the AP axis is always specified before the DV axis, which is known as the second embryonic axis. The AP axis is essential in defining the polarity of bilateria and allowing the development of a front and back to give the organism direction. The front end encounters the environment before the rest of the body, so sensory organs such as eyes tend to be clustered there. This is also the site where a mouth develops, since it is the first part of the body to encounter food. Therefore, a distinct head, with sense organs connected to a central nervous system, tends to develop. This pattern of development (with a distinct head and tail) is called cephalization. It is also argued that the development of an AP axis is important in locomotion – bilateral symmetry gives the body an intrinsic direction and allows streamlining to reduce drag. In addition to animals, the flowers of some plants also show bilateral symmetry. Such plants are referred to as zygomorphic and include the orchid (Orchidaceae) and pea (Fabaceae) families, and most of the figwort family (Scrophulariaceae). The leaves of plants also commonly show approximate bilateral symmetry. Biradial symmetry Biradial symmetry is found in organisms which show morphological features (internal or external) of both bilateral and radial symmetry. Unlike radially symmetrical organisms, which can be divided equally along many planes, biradial organisms can only be cut equally along two planes. This could represent an intermediate stage in the evolution of bilateral symmetry from a radially symmetric ancestor. The animal group with the most obvious biradial symmetry is the ctenophores. In ctenophores the two planes of symmetry are (1) the plane of the tentacles and (2) the plane of the pharynx.
In addition to this group, evidence for biradial symmetry has even been found in the 'perfectly radial' freshwater polyp Hydra (a cnidarian). Biradial symmetry, especially when considering both internal and external features, is more common than originally thought. Evolution of symmetry Like other traits of organisms, symmetry (or indeed asymmetry) evolves because it confers an advantage on the organism – a process of natural selection. This involves changes in the frequency of symmetry-related genes over time. Early flowering plants had radially symmetric flowers, but since then many plants have evolved bilaterally symmetrical flowers. The evolution of bilateral symmetry is due to the expression of CYCLOIDEA genes. Evidence for the role of the CYCLOIDEA gene family comes from mutations in these genes, which cause a reversion to radial symmetry. The CYCLOIDEA genes encode transcription factors, proteins which control the expression of other genes. This allows their expression to influence developmental pathways relating to symmetry. For example, in Antirrhinum majus, CYCLOIDEA is expressed during early development in the dorsal domain of the flower meristem and continues to be expressed later on in the dorsal petals to control their size and shape. It is believed that the evolution of specialized pollinators may play a part in the transition from radially symmetrical to bilaterally symmetrical flowers. Symmetry is often selected for in the evolution of animals. This is unsurprising, since asymmetry is often an indication of unfitness – either defects during development or injuries throughout a lifetime. This is most apparent in mate choice, during which females of some species select males with highly symmetrical features. Additionally, female barn swallows, a species where adults have long tail streamers, prefer to mate with males that have the most symmetrical tails. While symmetry is known to be under selection, the evolutionary history of different types of symmetry in animals is an area of extensive debate. Traditionally it has been suggested that bilateral animals evolved from a radial ancestor. Cnidarians, a phylum containing animals with radial symmetry, are the most closely related group to the bilaterians. Cnidarians are one of two groups of early animals considered to have defined structure, the second being the ctenophores. Ctenophores show biradial symmetry, leading to the suggestion that they represent an intermediate step in the evolution of bilateral symmetry from radial symmetry. Interpretations based only on morphology are not sufficient to explain the evolution of symmetry. Two different explanations are proposed for the different symmetries in cnidarians and bilateria. The first suggestion is that an ancestral animal had no symmetry (was asymmetric) before cnidarians and bilaterians separated into different evolutionary lineages; radial symmetry could then have evolved in cnidarians and bilateral symmetry in bilaterians. Alternatively, the second suggestion is that an ancestor of cnidarians and bilaterians had bilateral symmetry before the cnidarians diverged and became different by evolving radial symmetry. Both potential explanations are being explored and evidence continues to fuel the debate. Asymmetry Although asymmetry is typically associated with being unfit, some species have evolved to be asymmetrical as an important adaptation. Many members of the phylum Porifera (sponges) have no symmetry, though some are radially symmetric.
The presence of these asymmetrical features requires a process of symmetry breaking during development, both in plants and animals. Symmetry breaking occurs at several different levels in order to generate the anatomical asymmetry which we observe. These levels include asymmetric gene expression, protein expression, and activity of cells. For example, left–right asymmetry in mammals has been investigated extensively in the embryos of mice. Such studies have led to support for the nodal flow hypothesis. In a region of the embryo referred to as the node there are small hair-like structures (monocilia) that all rotate together in a particular direction. This creates a unidirectional flow of signalling molecules, causing these signals to accumulate on one side of the embryo and not the other. This results in the activation of different developmental pathways on each side, and subsequent asymmetry. Much of the investigation of the genetic basis of symmetry breaking has been done on chick embryos. In chick embryos the left side expresses genes called NODAL and LEFTY2 that activate PITX2 to signal the development of left-side structures, whereas the right side does not express PITX2 and consequently develops right-side structures. For more information about symmetry breaking in animals, refer to the left–right asymmetry page. Plants also show asymmetry. For example, the direction of helical growth in Arabidopsis, the most commonly studied model plant, shows left-handedness. Interestingly, the genes involved in this asymmetry are similar (closely related) to those in animal asymmetry – both LEFTY1 and LEFTY2 play a role. In the same way as animals, symmetry breaking in plants can occur at the molecular (genes/proteins), subcellular, cellular, tissue and organ levels. Fluctuating asymmetry (FA) is a form of biological asymmetry, along with anti-symmetry and directional asymmetry. Fluctuating asymmetry refers to small, random deviations away from perfect bilateral symmetry. This deviation from perfection is thought to reflect the genetic and environmental pressures experienced throughout development, with greater pressures resulting in higher levels of asymmetry. Examples of FA in the human body include unequal sizes (asymmetry) of bilateral features in the face and body, such as left and right eyes, ears, wrists, breasts, testicles, and thighs. Research has identified multiple factors associated with FA. As measuring FA can indicate developmental stability, it can also suggest the genetic fitness of an individual. This can further have an effect on mate attraction and sexual selection, as less asymmetry reflects greater developmental stability and subsequent fitness. Human physical health is also associated with FA. For example, young men with greater FA report more medical conditions than those with lower levels of FA. Multiple other factors can be linked to FA, such as intelligence and personality traits. See also References |
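To make the measurement point above concrete, here is a minimal Python sketch of one commonly used size-corrected FA index (mean of |left − right| divided by trait size). The trait names, the measurements, and the function name are hypothetical illustrations, not data or methods from the article.

    import statistics

    def fluctuating_asymmetry(trait_pairs):
        """Composite FA score: mean absolute left-right difference,
        size-corrected by dividing by each trait's mean size."""
        indices = []
        for trait, (left, right) in trait_pairs.items():
            mean_size = (left + right) / 2
            indices.append(abs(left - right) / mean_size)
        return statistics.mean(indices)

    # Hypothetical measurements (mm) for one individual.
    subject = {
        "ear_length":  (62.1, 63.0),
        "wrist_width": (54.3, 54.1),
        "foot_length": (251.0, 249.2),
    }
    print(f"composite FA index: {fluctuating_asymmetry(subject):.4f}")

Size correction matters because a 1 mm difference is large for a wrist but negligible for a femur; averaging the relative deviations across several traits gives a crude composite indicator of the developmental stability the text describes.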
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Electrical_network] | [TOKENS: 1168] |
Contents Electrical network An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Thus all circuits are networks, but not all networks are circuits (although networks without a closed loop are often referred to as open circuits). A resistive network is a network containing only resistors and ideal current and voltage sources. Analysis of resistive networks is less complicated than analysis of networks containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC network. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties. A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools. Classification An active network contains at least one source of electromotive force – a voltage or current source that can supply energy to the network indefinitely. Practical examples of such sources include a battery or a generator. Active elements can inject power into the circuit, provide power gain, and control the current flow within the circuit. A passive network contains no sources of electromotive force; it consists of passive elements such as resistors and capacitors. Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency-domain methods such as Laplace transforms, to determine DC response, AC response, and transient response. Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation by a large enough current; in this region, the behaviour of the inductor is very non-linear. Discrete passive components (resistors, capacitors and inductors) are called lumped elements because their resistance, capacitance or inductance is assumed to be concentrated ("lumped") at a single point. This design philosophy is called the lumped-element model, and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds, because there is a significant fraction of a wavelength across the component dimensions. A new design model, called the distributed-element model, is needed for such cases. Networks designed to this model are called distributed-element circuits. A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter. Classification of sources Sources can be classified as independent sources and dependent sources. 
An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC), and is not changed by any variation in the connected network. A dependent source, by contrast, delivers a power, voltage or current that is determined by another quantity elsewhere in the circuit, according to the type of source it is. Applying electrical laws A number of electrical laws apply to all linear resistive networks, notably Kirchhoff's current law, Kirchhoff's voltage law and Ohm's law. Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically; a sketch of this procedure is given below. The laws can generally be extended to networks containing reactances. They cannot be used in networks that contain nonlinear or time-varying components. Design methods To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases, the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model. Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and Verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes. Network simulation software More complex circuits can be analyzed numerically with software such as SPICE or Gnucap, or symbolically using software such as SapWin. When faced with a new circuit, the software first tries to find a steady-state solution, that is, one where all nodes conform to Kirchhoff's current law and the voltages across and currents through each element of the circuit conform to the voltage/current equations governing that element. Once the steady-state solution is found, the operating points of each element in the circuit are known. For a small-signal analysis, every non-linear element can be linearized around its operating point to obtain a small-signal estimate of the voltages and currents. This is an application of Ohm's law. The resulting linear circuit matrix can be solved with Gaussian elimination. Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time. See also References |
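As a minimal sketch of the procedure mentioned above – Kirchhoff's laws reduce a linear resistive network to simultaneous equations solvable by Gaussian elimination – here is a nodal analysis example in Python. The circuit topology, component values and helper names are hypothetical, chosen only for illustration.

    import numpy as np

    # Hypothetical resistive network: a 1 A current source feeds node 1;
    # R1 connects node 1 to node 2, R2 node 2 to ground, R3 node 1 to ground.
    R1, R2, R3 = 100.0, 220.0, 330.0   # ohms
    I_src = 1.0                        # amperes injected into node 1

    # Kirchhoff's current law at each node, with Ohm's law substituted,
    # gives the linear system G @ v = i (G is the nodal conductance matrix).
    G = np.array([
        [1/R1 + 1/R3, -1/R1       ],   # KCL at node 1
        [-1/R1,        1/R1 + 1/R2],   # KCL at node 2
    ])
    i = np.array([I_src, 0.0])         # external current injected at each node

    v = np.linalg.solve(G, i)          # Gaussian elimination (LU) under the hood
    print(f"node voltages: v1 = {v[0]:.2f} V, v2 = {v[1]:.2f} V")

Production simulators such as SPICE assemble essentially the same kind of matrix automatically (modified nodal analysis) and re-solve it at each operating point or time step.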
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Amal_Kumar_Raychaudhuri] | [TOKENS: 970] |
Contents Amal Kumar Raychaudhuri Amal Kumar Raychaudhuri (14 September 1923 – 18 June 2005) was an Indian physicist, known for his research in general relativity and cosmology. His most significant contribution is the eponymous Raychaudhuri equation, which demonstrates that singularities arise inevitably in general relativity and is a key ingredient in the proofs of the Penrose–Hawking singularity theorems. Raychaudhuri was also revered as a teacher during his tenure at Presidency College, Kolkata. Career Raychaudhuri was born into a Bengali Baidya family from Barisal (now in Bangladesh) on 14 September 1923, to Surabala and Sureshchandra Raychaudhuri. He was just a child when the family migrated to Kolkata. He had his early education at Tirthapati Institution and later completed matriculation from Hindu School, Kolkata. In a documentary film made just before his death in 2005, Raychaudhuri reveals that he was extremely passionate about mathematics right from his schooldays and that solving problems gave him immense pleasure. He recalls in the documentary how his ninth-grade teacher credited him with discovering a simpler solution to a mathematics problem. Perhaps the fact that his father was a school mathematics teacher also inspired him. At the same time, because his father had not been especially 'successful', so to speak, he was discouraged from taking up mathematics, his first choice, as his honours subject in college. He earned a B.Sc. from Presidency College in 1942 and an M.Sc. in 1944 from the Science College campus of Calcutta University, and he joined the Indian Association for the Cultivation of Science (IACS) in 1945 as a research scholar. In 1952, he took a research job with the IACS, but to his frustration was required to work on the properties of metals rather than general relativity. Despite these adverse pressures, a few years later he was able to derive and publish the equation that is now named for him. The Raychaudhuri equation is a key ingredient in the proofs of the Penrose–Hawking singularity theorems. Some years later, having learned that his 1955 paper was highly regarded by notable physicists such as Pascual Jordan, Raychaudhuri was sufficiently emboldened to submit a doctoral dissertation, and received his Doctor of Science degree from the University of Calcutta in 1959 (with one of the examiners, Prof. John Archibald Wheeler, recording special appreciation of the work). In 1961, Raychaudhuri joined the faculty of his alma mater, Presidency College, then affiliated with the University of Calcutta, and remained there until his superannuation. He became a well-known scientific figure in the 1970s, and was the subject of a short documentary film completed shortly before his death. Dipayan Pal wrote of Raychaudhuri for Science Reporter (CSIR, NISCAIR) in 2018: In general relativity, the Raychaudhuri equation plays a significant role in explaining space-time singularities and gravitational focusing properties in cosmology. He aimed to address the fundamental question of singularity in the most simple and general form, with no reference to any symmetry or to any specific property of space-time and energy distribution. The first mention of the term 'Raychaudhuri equation' appeared in a research paper published in 1965 by George F.R. Ellis and Stephen Hawking. 
The Raychaudhuri equation reached the zenith of its fame as a key tool in the hands of young relativists like Stephen Hawking and Roger Penrose in the mid-to-late 1960s, in their attempt to answer the question of the existence of space-time singularities and to explain the theory of the universe. In fact, this equation is important as a fundamental lemma for the Penrose–Hawking singularity theorems. The equation enjoys such wide acceptance – like other notable equations in physics such as the Dirac equation and the Schrödinger equation – that few pause to consider its origin or date of publication. The Raychaudhuri equation paved the way for later research into the singularity problem, and it has found its place in venerable textbooks on general relativity and relativistic cosmology. The equation will stand firm as long as Einstein's GTR stands. It remains a prime tool for investigating the behaviour of black hole horizons. One wonders whether any research work of comparable stature has emerged in post-independence India. Honours and recognition Selected Publications Notes References |
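The article discusses the equation at length without stating it. For reference, its standard form for a congruence of timelike geodesics, as found in standard general-relativity textbooks (θ the expansion, σ the shear, ω the vorticity, u the 4-velocity, R the Ricci tensor, τ proper time), is:

    \frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2} - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^{a}u^{b}

For matter obeying the strong energy condition, R_{ab}u^a u^b ≥ 0, so for a vorticity-free congruence every term on the right is non-positive; an initially converging bundle of geodesics (θ < 0) therefore focuses to a caustic in finite proper time. This focusing lemma is precisely the role the equation plays in the Penrose–Hawking singularity theorems mentioned above.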
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Broadcast_television] | [TOKENS: 4465] |
Contents Broadcast television systems Broadcast television systems (or terrestrial television systems outside the US and Canada) are the encoding or formatting systems for the transmission and reception of terrestrial television signals. Analog television systems were standardized by the International Telecommunication Union (ITU) in 1961, with each system designated by a letter (A-N) in combination with the color standard used (NTSC, PAL or SECAM) - for example PAL-B, NTSC-M, etc. These analog systems for TV broadcasting dominated until the 2000s. With the introduction of digital terrestrial television (DTT), they were replaced by four main systems in use around the world: ATSC, DVB, ISDB and DTMB. Analog television systems Every analog television system bar one began as a black-and-white system. Each country, faced with local political, technical, and economic issues, later adopted a color television standard which was grafted onto an existing monochrome system such as CCIR System M, using gaps in the video spectrum (explained below) to allow the color transmission information to fit in the existing channels allotted. The grafting of the color transmission standards onto existing monochrome systems permitted monochrome television receivers predating the changeover to continue to be operated as monochrome receivers. Because of this compatibility requirement, color standards added a second signal to the basic monochrome signal, which carries the color information. The color information is called chrominance, with the symbol C, while the black-and-white information is called the luminance, with the symbol Y. Monochrome television receivers only display the luminance, while color receivers process both signals. Though in theory any monochrome system could be adapted to a color system, in practice some of the original monochrome systems proved impractical to adapt to color and were abandoned when the switch to color broadcasting was made. All countries used one of three color standards: NTSC, PAL, or SECAM. For example, CCIR System M was often used in conjunction with the NTSC standard to provide color analog television, and the two together were known as NTSC-M. A number of experimental and broadcast systems were tested before WW2. The first ones were mechanically based and of very low resolution, sometimes with no sound. Later TV systems were electronic, and are usually identified by their line count: 375-line (used in Germany, Italy, US), 405-line (used in the UK), 441-line (used in Germany, France, Italy, US) or 567-line (used in the Netherlands). These systems were mostly experimental and national, with no defined international standards, and did not resume broadcasting after the war. An exception was the UK 405-line system, which resumed broadcasts and was the first to be standardized by the ITU as System A, remaining in operation until 1985. At an international conference in Stockholm in 1961, the International Telecommunication Union designated standards for broadcast television systems (ITU System Letter Designation), each identified by a letter. On VHF bands I, II and III the 405, 625 and 819-line systems could be used; on UHF bands IV and V only 625-line systems were adopted, with the difference being transmission parameters like channel bandwidth. Following further conferences and the introduction of color television, by 1966 each standard was designated by a letter (A-N) in combination with a color standard (NTSC, PAL, SECAM). 
This completely specifies all of the monaural analog television systems in the world (for example, PAL-B, NTSC-M, etc.). The following table (not reproduced here) gives the principal characteristics of each standard; except for lines and frame rates, units are megahertz (MHz). For historical reasons, some countries use a different video system on UHF than they do on the VHF bands. In a few countries, most notably the United Kingdom, television broadcasting on VHF has been entirely shut down. The British 405-line System A, unlike all the other systems, suppressed the upper sideband rather than the lower – befitting its status as the oldest operating television system to survive into the color era (although it was never officially broadcast with color encoding). System A was tested with all three color standards, and production equipment was designed and ready to be built; System A might have survived, as NTSC-A, had the British government not decided to harmonize with the rest of Europe on a 625-line video system, implemented in Britain as PAL-I on UHF only. The French 819-line System E was a post-war effort to advance France's standing in television technology. Its 819 lines were almost high definition even by today's standards. Like the British System A, it was VHF-only and remained black-and-white until its shutdown in 1984 in France and 1985 in Monaco. It was tested with the SECAM standard in the early stages, but the decision was later made to adopt color only in the 625-line System L. Thus, France adopted System L on both UHF and VHF networks and abandoned System E. Japan had the earliest working HDTV system (MUSE), with design efforts going back to 1979. The country began broadcasting wideband analog high-definition video signals in the late 1980s using an interlaced resolution of 1,125 lines, supported by the Sony HDVS line of equipment. In many parts of the world, analog television broadcasting has been shut down completely or is in the process of being shut down; see Digital television transition for a timeline of the analog shutdown. Ignoring color, all television systems work in essentially the same manner. The monochrome image seen by a camera (later, the luminance component of a color image) is divided into horizontal scan lines, some number of which make up a single image or frame. A monochrome image is theoretically continuous, and thus unlimited in horizontal resolution, but to make television practical, a limit had to be placed on the bandwidth of the television signal, which puts an ultimate limit on the horizontal resolution possible. When color was introduced, this limit necessarily became fixed. All analog television systems are interlaced: alternate rows of the frame are transmitted in sequence, followed by the remaining rows in their sequence. Each half of the frame is called a video field, and the rate at which fields are transmitted is one of the fundamental parameters of a video system. It is related to the utility frequency at which the electricity distribution system operates, to avoid flicker resulting from the beat between the television screen deflection system and nearby mains-generated magnetic fields. All digital, or "fixed pixel," displays have progressive scanning and must deinterlace an interlaced source. Use of inexpensive deinterlacing hardware is a typical difference between lower- vs. higher-priced flat panel displays (plasma display, LCD, etc.). 
All films and other material shot at 24 frames per second must be transferred to video frame rates using a telecine in order to prevent severe motion jitter effects. Typically, for 25 frame/s formats (Europe, among other countries with a 50 Hz mains supply), the content is run slightly fast (about 4%, known as "PAL speedup"), while a technique known as "3:2 pulldown" is used for 30 frame/s formats (North America, among other countries with a 60 Hz mains supply) to match the film frame rate to the video frame rate without speeding up the playback. Analog television signal standards are designed to be displayed on a cathode ray tube (CRT), and so the physics of these devices necessarily controls the format of the video signal. The image on a CRT is painted by a moving beam of electrons which hits a phosphor coating on the front of the tube. This electron beam is steered by a magnetic field generated by powerful electromagnets close to the source of the electron beam. In order to reorient this magnetic steering mechanism, a certain amount of time is required due to the inductance of the magnets; the greater the change, the greater the time it takes for the electron beam to settle in the new spot. For this reason, it is necessary to shut off the electron beam (corresponding to a video signal of zero luminance) during the time it takes to reorient the beam from the end of one line to the beginning of the next (horizontal retrace) and from the bottom of the screen to the top (vertical retrace or vertical blanking interval). The horizontal retrace is accounted for in the time allotted to each scan line, but the vertical retrace is accounted for as phantom lines which are never displayed but which are included in the number of lines per frame defined for each video system. Since the electron beam must be turned off in any case, the result is gaps in the television signal, which can be used to transmit other information, such as test signals or color identification signals. The temporal gaps translate into a comb-like frequency spectrum for the signal, in which the teeth are spaced at the line frequency and concentrate most of the energy; the space between the teeth can be used to insert a color subcarrier. Broadcasters later developed mechanisms to transmit digital information on the phantom lines, used mostly for teletext and closed captioning. Television images are unique in that they must incorporate regions of the picture with reasonable-quality content that will never be seen by some viewers, owing to overscan. In a purely analog system, field order is merely a matter of convention. For digitally recorded material it becomes necessary to rearrange the field order when conversion takes place from one standard to another. Another parameter of analog television systems, minor by comparison, is the choice of whether vision modulation is positive or negative. Some of the earliest electronic television systems, such as the British 405-line System A, used positive modulation. It was also used in the two Belgian systems (System C, 625 lines, and System F, 819 lines) and the two French systems (System E, 819 lines, and System L, 625 lines). In positive modulation systems, as in the earlier white facsimile transmission standard, the maximum luminance value is represented by the maximum carrier power; in negative modulation, the maximum luminance value is represented by zero carrier power. All newer analog video systems use negative modulation with the exception of the French System L. 
Impulse noise, especially from older automotive ignition systems, caused white spots to appear on the screens of television receivers using positive modulation, although such receivers could use simple synchronization circuits. In negative-modulation systems impulse noise appears as dark spots that are less visible, but picture synchronization was seriously degraded when using simple synchronization. The synchronization problem was overcome with the invention of phase-locked synchronization circuits. When these first appeared in Britain in the early 1950s, one name used to describe them was "flywheel synchronisation." Older televisions for positive-modulation systems were sometimes equipped with a peak video signal inverter that would turn the white interference spots dark. This was usually user-adjustable with a control on the rear of the television labeled "White Spot Limiter" in Britain or "Antiparasite" in France. If adjusted incorrectly it would turn bright white picture content dark. Most of the positive-modulation television systems ceased operation by the mid-1980s. The French System L continued until the transition to digital broadcasting. Positive modulation was one of several unique technical features that originally protected the French electronics and broadcasting industry from foreign competition and rendered French TV sets incapable of receiving broadcasts from neighboring countries. Another advantage of negative modulation is that, since the synchronizing pulses represent maximum carrier power, it is relatively easy to arrange the receiver automatic gain control to operate only during sync pulses and thus get a constant-amplitude video signal to drive the rest of the TV set. This was not possible for many years with positive modulation, as the peak carrier power varied depending on picture content. Modern digital processing circuits have achieved a similar effect, but using the front porch of the video signal. Given all of these parameters, the result is a mostly continuous analog signal which can be modulated onto a radio-frequency carrier and transmitted through an antenna. All analog television systems use vestigial sideband modulation, a form of amplitude modulation in which one sideband is partially removed. This reduces the bandwidth of the transmitted signal, enabling narrower channels to be used. In analog television, the analog audio portion of a broadcast is invariably modulated separately from the video. Most commonly, the audio and video are combined at the transmitter before being presented to the antenna, but separate aural and visual antennas can be used. In all cases where negative video is used, FM is used for the standard monaural audio; systems with positive video use AM sound, and intercarrier receiver technology cannot be incorporated. Stereo, or more generally multi-channel, audio is encoded using a number of schemes which (except in the French systems) are independent of the video system. The principal systems are NICAM, which uses digital audio encoding; double-FM (known under a variety of names, notably Zweikanalton, A2 Stereo, West German Stereo, German Stereo or IGR Stereo), in which each audio channel is separately modulated in FM and added to the broadcast signal; and BTSC (also known as MTS), which multiplexes additional audio channels into the FM audio carrier. All three systems are compatible with monaural FM audio, but only NICAM may be used with the French AM audio systems. 
Digital television systems The situation with worldwide digital television is much simpler by comparison. Most digital television systems are based on the MPEG transport stream standard and use the H.262/MPEG-2 Part 2 video codec. They differ significantly in the details of how the transport stream is converted into a broadcast signal, in the video format prior to encoding (or alternatively, after decoding), and in the audio format. This has not prevented the creation of an international standard that includes both major systems, even though they are incompatible in almost every respect. The two principal digital broadcasting systems are the ATSC standards, developed by the Advanced Television Systems Committee and adopted as a standard in most of North America, and DVB-T, the Digital Video Broadcast – Terrestrial system, used in most of the rest of the world. DVB-T was designed for format compatibility with existing direct broadcast satellite services in Europe, which use the DVB-S standard (DVB-S also sees some use by direct-to-home satellite dish providers in North America), and there is also a DVB-C version for cable television. While the ATSC standard also includes support for satellite and cable television systems, operators of those systems have chosen other technologies (principally DVB-S or proprietary systems for satellite, and 256QAM replacing VSB for cable). Japan uses a third system, closely related to DVB-T, called ISDB-T, which is compatible with Brazil's SBTVD. The People's Republic of China has developed a fourth system, named DMB-T/H. The terrestrial ATSC system (unofficially ATSC-T) uses a proprietary Zenith-developed modulation called 8-VSB; as the name implies, it is a vestigial sideband technique. Essentially, analog VSB is to regular amplitude modulation as 8VSB is to eight-way quadrature amplitude modulation. This system was chosen specifically to provide maximum spectral compatibility between existing analog TV and new digital stations in the United States' already-crowded television allocations system. It is inferior to the other digital systems in dealing with multipath interference, but better at dealing with impulse noise, which is especially present on the VHF bands that other countries have discontinued from TV use but which are still used in the U.S. There is also no hierarchical modulation. After demodulation and error correction, the 8-VSB modulation supports a digital data stream of about 19.39 Mbit/s, enough for one high-definition video stream or several standard-definition services. See Digital subchannel: Technical considerations for more information. On November 17, 2017, the FCC voted 3-2 in favor of authorizing voluntary deployments of ATSC 3.0, which was designed as the successor to the original ATSC "1.0", and issued a Report and Order to that effect. Full-power stations will be required to maintain a simulcast of their channels on an ATSC 1.0-compatible signal if they decide to deploy an ATSC 3.0 service. On cable, ATSC usually uses 256QAM, although some systems use 16VSB. Both of these double the throughput to 38.78 Mbit/s within the same 6 MHz bandwidth. ATSC is also used over satellite. While these are logically called ATSC-C and ATSC-S, those terms were never officially defined. DTMB is the digital television broadcasting standard of Mainland China, Hong Kong and Macau. 
This is a fusion system, a compromise among the different standards proposed by competing Chinese universities, which incorporates elements from DMB-T, ADTB-T and TiMi 3. DVB-T uses coded orthogonal frequency-division multiplexing (COFDM), which uses as many as 8,000 independent carriers, each transmitting data at a comparatively low rate. This system was designed to provide superior immunity from multipath interference, and has a choice of system variants which allow data rates from 4 Mbit/s up to 24 Mbit/s. One US broadcaster, Sinclair Broadcasting, petitioned the Federal Communications Commission to permit the use of COFDM instead of 8-VSB, on the theory that this would improve prospects for digital TV reception by households without outside antennas (a majority in the US), but this request was denied. (However, one US digital station, WNYE-DT in New York, was temporarily converted to COFDM modulation on an emergency basis for datacasting information to emergency services personnel in lower Manhattan in the aftermath of the September 11 terrorist attacks.) DVB-S is the original Digital Video Broadcasting forward error coding and modulation standard for satellite television and dates back to 1995. It is used via satellites serving every continent of the world, including North America. DVB-S is used in both MCPC and SCPC modes for broadcast network feeds, as well as for direct broadcast satellite services like Sky and Freesat in the British Isles, Sky Deutschland and HD+ in Germany and Austria, TNT Sat/Fransat and CanalSat in France, Dish Network in the US, and Bell Satellite TV in Canada. The MPEG transport stream delivered by DVB-S is mandated as MPEG-2. DVB-C stands for Digital Video Broadcasting – Cable, and it is the DVB European consortium standard for the broadcast transmission of digital television over cable. This system transmits an MPEG-2 family digital audio/video stream, using QAM modulation with channel coding. ISDB is very similar to DVB; however, the channel is broken into 13 subchannels. Twelve are used for TV, while the last serves either as a guard band or for the 1seg (ISDB-H) service. Like the other DTV systems, the ISDB types differ mainly in the modulations used, due to the requirements of different frequency bands. The 12 GHz band ISDB-S uses PSK modulation, 2.6 GHz band digital sound broadcasting uses CDM, and ISDB-T (in the VHF and/or UHF band) uses COFDM with PSK/QAM. It was developed in Japan with MPEG-2, and is now used in Brazil with MPEG-4. Unlike other digital broadcast systems, ISDB includes digital rights management to restrict recording of programming. Line count As interlaced systems require accurate positioning of scanning lines, it is important to make sure that the horizontal and vertical timebases are in a precise ratio. This is accomplished by passing the one through a series of electronic divider circuits to produce the other. Each division is by a prime number. Therefore, there has to be a straightforward mathematical relationship between the line and field frequencies, the latter being derived by dividing down from the former. Technology constraints of the 1930s meant that this division process could only be done using small integers, preferably no greater than 7, for good stability. The number of lines was odd because of 2:1 interlace. 
The 405-line system used a vertical frequency of 50 Hz (the standard AC mains supply frequency in Britain) and a horizontal one of 10,125 Hz (50 × 405 ÷ 2; see the sketch below). Conversion from one system to another Converting between different numbers of lines and different frequencies of fields/frames in video pictures is not an easy task. Perhaps the most technically challenging conversion to make is from any of the 625-line, 25 frame/s systems to System M, which has 525 lines at 29.97 frames per second. Historically this required a frame store to hold those parts of the picture not actually being output (since the scanning of any point was not time-coincident). In more recent times, conversion of standards is a relatively easy task for a computer. Aside from the line count being different, it is easy to see that generating 59.94 fields every second from a format that has only 50 fields might pose some interesting problems. Every second, an additional 10 fields must be generated seemingly from nothing. The conversion has to create new frames (from the existing input) in real time. There are several methods used to do this, depending on the desired cost and conversion quality. The simplest possible converters simply drop every 5th line from every frame (when converting from 625 to 525) or duplicate every 4th line (when converting from 525 to 625), and then duplicate or drop some of those frames to make up the difference in frame rate. More complex systems include inter-field interpolation, adaptive interpolation, and phase correlation. See also Transmission technology standards Defunct analog systems Analog television systems Analog television system audio Digital television systems History References Further reading External links |
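To make the timebase and conversion arithmetic above concrete, here is a minimal Python sketch. It verifies the 405-line relationship (for 2:1 interlace, line frequency = field frequency × lines ÷ 2), factors the line count into the small primes a 1930s divider chain required, and performs the crude every-5th-line drop the text describes. The function names are illustrative, not from any real video library.

    from fractions import Fraction

    def line_frequency(field_hz, lines):
        """For 2:1 interlace each field scans half the frame's lines,
        so line rate = field rate * lines / 2."""
        return Fraction(field_hz) * lines / 2

    def prime_factors(n):
        """Small-prime factorisation, as a divider chain would need."""
        factors, d = [], 2
        while n > 1:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        return factors

    print(line_frequency(50, 405))   # 10125 Hz, matching the text
    print(prime_factors(405))        # [3, 3, 3, 3, 5] -- all <= 7

    def drop_every_fifth(frame):
        """Crude 625 -> 525 standards conversion: discard every 5th line."""
        return [line for i, line in enumerate(frame) if (i + 1) % 5 != 0]

    frame_625 = list(range(625))     # stand-in for 625 scan lines
    print(len(drop_every_fifth(frame_625)))   # 500 lines

Note that the every-5th-line method actually yields 500 lines, slightly under the 525-line target; that shortfall is one reason such converters are the lowest-quality option compared with the interpolation methods the text lists.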
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jason_Rubin] | [TOKENS: 1859] |
Contents Jason Rubin Jason Rubin (born 1970) is an American video game director, writer, and comic book creator. He is best known for the Crash Bandicoot and Jak and Daxter series of games, which were produced by Naughty Dog, the game development studio he co-founded with partner and childhood friend Andy Gavin in 1986. He was the president of THQ before its closure due to bankruptcy on January 23, 2013. Rubin is the vice president of Metaverse Content at Meta Platforms. Career Rubin and Andy Gavin formed Naughty Dog in 1984. Later that year, they published their first game together — a budgetware title called Ski Crazed. In 1989, Rubin and Gavin sold their first game to Electronic Arts: a role-playing game called Keef the Thief. Rubin took a brief hiatus from school and game design to move to Los Angeles and attempt a career as a screenwriter, but after little success he returned to school and game design. While Gavin was an undergraduate at Haverford College and Rubin was attending the University of Michigan, they collaborated on their next title, a role-playing game called Rings of Power. The game began as a PC title, but during meetings at Electronic Arts Gavin spotted a reverse-engineered Sega Genesis, pitched a slightly modified version of the title to Trip Hawkins, and the title became the duo's first console game. Rings of Power still has a cult following today. After much persuasion from Hawkins, Rubin and Gavin took a leap of faith and started designing Way of the Warrior, which was heavily inspired by Mortal Kombat, for the 3DO console. They demoed the game at CES and received interest from Skip Paul, former chairman of Atari's Coin-Op division and then head of the new Universal Interactive Studios. Paul signed the pair to a three-title development deal at Universal, moving them out to the Universal Studios lot and introducing them to Mark Cerny, who worked with the pair on the design of their next title, a "Donkey Kong Country-inspired" 3D platformer called Crash Bandicoot. Crash Bandicoot turned out to be an enormous success, and Sony used the main character as their unofficial PlayStation mascot for several years. Due to the impressive visuals which the developer was able to achieve on the PlayStation console, the game served as a quality benchmark that other game developers aimed to match, and the series spawned three sequels by Naughty Dog, selling over 26 million units. The series continues with other development teams, having sold more than 40 million units worldwide. After their success with Crash Bandicoot, Rubin and Gavin began working on Jak and Daxter, a franchise that sold 9 million units through the various Naughty Dog incarnations. The series continued with other developers and as of 2017 had sold 15 million copies worldwide. Before Jak and Daxter's release, Sony purchased Naughty Dog, which became a wholly owned subsidiary of Sony Computer Entertainment America in 2001. As a result, Jak and Daxter: The Precursor Legacy was developed exclusively for the PlayStation 2. 
In their 18 years running Naughty Dog, they created fourteen original games, including Math Jam (1985), Ski Crazed (1986), Dream Zone (1987), Keef the Thief (1989), Rings of Power (1991), Way of the Warrior (1994), Crash Bandicoot (1996), Crash Bandicoot 2: Cortex Strikes Back (1997), Crash Bandicoot: Warped (1998), Crash Team Racing (1999), Jak and Daxter: The Precursor Legacy (2001), Jak II (2003), Jak 3 (2004) and Jak X: Combat Racing (2005). Together these games have sold over 35 million units and generated over $1 billion in revenue. Just days after making a controversial speech at 2004's D.I.C.E. Summit that criticized publishers for not recognizing and promoting the talent responsible for creating games, Rubin publicly announced his departure from Naughty Dog. On May 29, 2012, Rubin joined the struggling video game publisher THQ as president, responsible for all of THQ's worldwide product development, marketing and publishing operations. At the time Rubin joined THQ, the company had laid off hundreds of its employees and the stock had lost over 99% of its value from its high. According to Game Industry International, "placing Jason Rubin at the company's helm was unquestionably a good move — the Naughty Dog founder has an enviable track record and quite rightly commands the respect of the industry — but by the time he took the role, THQ's stock had already crashed and layoffs were well underway. The company was mortally wounded; Rubin's failure to resuscitate his terminally ill patient should not reflect in any way on his own talents and abilities". To save the teams and products, management took the company through a restructuring. As part of that process, THQ filed for Chapter 11 with the intention of selling off its assets at auction. Soon after, THQ management announced a stalking-horse bid for the company by Clearlake Capital for $60 million. Handling the sale of THQ was Centerview Partners' Skip Paul, a former colleague of Jason Rubin. Creditors said the proposed sale of THQ in bankruptcy court benefited current THQ management, including Rubin. Early creditor objections and court documents were not kind to THQ management. Though not as widely publicized as the initial criticism, Judge Walrath put an end to the entire mismanagement line of argument when she called it a "conspiracy theory" on the record. Additionally, the same creditors that made the initial accusations ultimately took the unusual step of releasing THQ management, including Rubin, from any claim of malfeasance in the company's official Plan of Liquidation. Rubin's public statements at the time make clear that management was always open to, and actively seeking, higher bidders at the same time as it tried to hold the company together, both for the benefit of the company and its creditors: Our Chapter 11 process allows for other bidders to make competing offers for THQ. So while we are extremely excited about the Clearlake [stalking horse] opportunity, we won't be able to say that the deal is done for a month or so. Whatever happens, the teams and products look likely to end up together and in good hands. That means you can still pre-order Metro: Last Light, Company of Heroes 2, and South Park: The Stick of Truth. Our teams are still working on those titles as you read this, and all other rumored titles, like the fourth Saints Row, the Homefront sequel, and a lot more are also still in the works. — Jason Rubin, THQ Press Release Judge Mary F. 
Walrath decided to have an auction for the individual assets, and competing offers for the separate parts of THQ prevailed. Though many employees lost their jobs in the bankruptcy, the development teams at Relic (bought by Sega), Volition (bought by Koch Media), and THQ Montreal (purchased by Ubisoft) remained intact, as did much of Vigil, which became Crytek USA, and all of the THQ products in the works survived the bankruptcy and have since come out or are scheduled to come out. In December 2012, THQ partnered with the Humble Bundle team at Wolfire Games to make the Humble THQ Bundle, raising over $5 million, much of it going to charity. Rubin donated over $10,000 to charity as part of the event. During E3 2014 it was announced that Rubin had joined Oculus VR, heading up the Oculus first-party content initiatives in Seattle, San Francisco, Menlo Park, Dallas and Irvine. In 2021, following Oculus parent company Facebook's rebranding as Meta, Rubin became VP of Metaverse Content, leading the company's VR and metaverse content production teams: the internal Studios, Publishing, and Developer Ecosystem teams. Other projects Rubin also created two comic book series. The Iron Saint, originally known as Iron and the Maiden, was published by Aspen Comics and included artwork by artists such as Joe Madureira, Jeff Matsuda, Francis Manapul and Joel Gomez. "Mysterious Ways" was published by Top Cow Comics and includes artwork by Tyler Kirkham. Rubin also co-founded an Internet startup called Flektor with Naughty Dog co-founder Andy Gavin and former HBO executive Jason Kay. In May 2007, the company was sold to Fox Interactive Media, a division of News Corp. Fox described the company as "a next-generation Web site that provides users with a suite of Web-based tools to transform their photos and videos into dynamic slideshows, postcards, live interactive presentations and video mash-ups." In October 2007, Flektor partnered with its sister company MySpace and with MTV to provide instant audience feedback via polls for the interactive MySpace/MTV Presidential Dialogues series with then-presidential candidate Senator Barack Obama. Video games References |
======================================== |