[SOURCE: https://en.wikipedia.org/wiki/Hartle%E2%80%93Thorne_metric] | [TOKENS: 271] |
Hartle–Thorne metric

The Hartle–Thorne metric is an approximate solution of the vacuum Einstein field equations of general relativity that describes the exterior of a slowly and rigidly rotating, stationary and axially symmetric body. The metric was found by James Hartle and Kip Thorne in the 1960s to study the spacetime outside neutron stars, white dwarfs and supermassive stars. It can be shown that it is an approximation to the Kerr metric (which describes a rotating black hole) when the quadrupole moment is set to q = −J²/M, which is the correct value for a black hole but not, in general, for other astrophysical objects.

Metric

Up to second order in the angular momentum J, the mass M and the quadrupole moment q, the metric in spherical coordinates involves the second Legendre polynomial P₂ = (3cos²θ − 1)/2.
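For context, the Kerr matching quoted above can be checked against standard facts about the Kerr solution; the identities below are well known in the literature and are supplied here for orientation rather than taken from the extract:

\[
a = \frac{J}{M}, \qquad Q_{\mathrm{Kerr}} = -\frac{J^{2}}{M} = -a^{2}M .
\]

A body whose quadrupole moment deviates from this value, such as a rotating neutron star, is therefore not matched by the Kerr metric even at second order, which is the regime the Hartle–Thorne metric was constructed for.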
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/David_Hilbert] | [TOKENS: 5653] |
David Hilbert

David Hilbert (/ˈhɪlbərt/; German: [ˈdaːvɪt ˈhɪlbɐt]; 23 January 1862 – 14 February 1943) was a German mathematician and philosopher of mathematics and one of the most influential mathematicians of his time. Hilbert discovered and developed a broad range of fundamental ideas including invariant theory, the calculus of variations, commutative algebra, algebraic number theory, the foundations of geometry, spectral theory of operators and its application to integral equations, mathematical physics, and the foundations of mathematics (particularly proof theory). He adopted and defended Georg Cantor's set theory and transfinite numbers. In 1900, he presented a collection of problems that set a course for mathematical research of the 20th century. Hilbert and his students contributed to establishing rigor and developed important tools used in modern mathematical physics. He was a co-founder of proof theory and mathematical logic.

Life

Hilbert, the first of two children and only son of Otto, a county judge, and Maria Therese Hilbert (née Erdtmann), the daughter of a merchant, was born in the Province of Prussia, Kingdom of Prussia, either in Königsberg, now Kaliningrad (according to Hilbert's own statement), or in Wehlau (known since 1946 as Znamensk) near Königsberg, where his father worked at the time of his birth. His paternal grandfather was David Hilbert, a judge and Geheimrat. His mother Maria had an interest in philosophy, astronomy and prime numbers, while his father Otto taught him Prussian virtues. After his father became a city judge, the family moved to Königsberg. David's sister, Elise, was born when he was six. He began his schooling aged eight, two years later than the usual starting age. In late 1872, Hilbert entered the Friedrichskolleg Gymnasium (Collegium fridericianum, the same school that Immanuel Kant had attended 140 years before); but, after an unhappy period, he transferred in late 1879 to the more science-oriented Wilhelm Gymnasium, from which he graduated in early 1880. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg, the "Albertina". In early 1882, Hermann Minkowski (two years younger than Hilbert, also a native of Königsberg, who had gone to Berlin for three semesters) returned to Königsberg and entered the university. Hilbert developed a lifelong friendship with the shy, gifted Minkowski. In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius (i.e., an associate professor). An intense and fruitful scientific exchange among the three began, and Minkowski and Hilbert especially would exercise a reciprocal influence over each other at various times in their scientific careers. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen ("On the invariant properties of special binary forms, in particular the spherical harmonic functions"). Hilbert remained at the University of Königsberg as a Privatdozent (senior lecturer) from 1886 to 1895. In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world. He remained there for the rest of his life. Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker, Ernst Zermelo, and Carl Gustav Hempel.
John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a social circle of some of the most important mathematicians of the 20th century, such as Emmy Noether and Alonzo Church. Among his 69 Ph.D. students in Göttingen were many who later became famous mathematicians, including (with date of thesis): Otto Blumenthal (1898), Felix Bernstein (1901), Hermann Weyl (1908), Richard Courant (1910), Erich Hecke (1910), Hugo Steinhaus (1911), and Wilhelm Ackermann (1925). Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen, the leading mathematical journal of the time. He was elected an International Member of the United States National Academy of Sciences in 1907. In 1892, Hilbert married Käthe Jerosch (1864–1945), the daughter of a Königsberg merchant, "an outspoken young lady with an independence of mind that matched [Hilbert's]." While at Königsberg, they had their one child, Franz Hilbert (1893–1969). Franz suffered throughout his life from mental illness, and after he was admitted to a psychiatric clinic, Hilbert said, "From now on, I must consider myself as not having a son." His attitude toward Franz brought Käthe considerable sorrow. Hilbert considered the mathematician Hermann Minkowski to be his "best and truest friend". Hilbert was baptized and raised a Calvinist in the Prussian Evangelical Church.[a] He later left the Church and became an agnostic.[b] He also argued that mathematical truth was independent of the existence of God or other a priori assumptions.[c][d] When Galileo Galilei was criticized for failing to stand up for his convictions on the heliocentric theory, Hilbert objected: "But [Galileo] was not an idiot. Only an idiot could believe that scientific truth needs martyrdom; that may be necessary in religion, but scientific results prove themselves in due time."[e] Like Albert Einstein, Hilbert had his closest contacts with the Berlin Group, whose leading founders had studied under Hilbert in Göttingen (Kurt Grelling, Hans Reichenbach, and Walter Dubislav). Around 1925, Hilbert developed pernicious anemia, a then-untreatable vitamin deficiency whose primary symptom is exhaustion; his assistant Eugene Wigner described him as subject to "enormous fatigue", noting that he "seemed quite old" and that, even after eventually being diagnosed and treated, he "was hardly a scientist after 1925, and certainly not a Hilbert". Hilbert was elected to the American Philosophical Society in 1932. Hilbert lived to see the Nazis purge many of the prominent faculty members at the University of Göttingen in 1933. Those forced out included Hermann Weyl (who had taken Hilbert's chair when he retired in 1930), Emmy Noether, and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic, and co-authored with him the important book Grundlagen der Mathematik (which eventually appeared in two volumes, in 1934 and 1939). This was a sequel to the Hilbert–Ackermann book Principles of Mathematical Logic (1928). Hermann Weyl's successor was Helmut Hasse. About a year after the purge, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust. Rust asked whether "the Mathematical Institute really suffered so much because of the departure of the Jews". Hilbert replied: "Suffered? It doesn't exist any longer, does it?"
By the time Hilbert died in 1943, the Nazis had nearly completely restaffed the university, as many of the former faculty had either been Jewish or married to Jews. Hilbert's funeral was attended by fewer than a dozen people, only two of whom were fellow academics, among them Arnold Sommerfeld, a theoretical physicist and also a native of Königsberg. News of his death only became known to the wider world several months after he died. The epitaph on his tombstone in Göttingen consists of the famous lines he spoke at the conclusion of his retirement address to the Society of German Scientists and Physicians on 8 September 1930. The words were given in response to the Latin maxim "Ignoramus et ignorabimus" ("We do not know and we shall not know"): Wir müssen wissen. Wir werden wissen. (We must know. We shall know.) The day before Hilbert pronounced these phrases at the 1930 annual meeting of the Society of German Scientists and Physicians, Kurt Gödel—in a round table discussion during the Conference on Epistemology held jointly with the Society meetings—tentatively announced the first expression of his incompleteness theorem.[f] Gödel's incompleteness theorems show that even elementary axiomatic systems such as Peano arithmetic are either self-contradicting or contain logical propositions that are impossible to prove or disprove within that system.

Contributions to mathematics and physics

Hilbert's first work on invariant functions led him to the demonstration in 1888 of his famous finiteness theorem. Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach. Attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. To solve what had become known in some circles as Gordan's Problem, Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated Hilbert's basis theorem, showing the existence of a finite set of generators for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, it was not a constructive proof—it did not display "an object"—but rather, it was an existence proof that relied on use of the law of excluded middle in an infinite extension. Hilbert sent his results to the Mathematische Annalen. Gordan, the house expert on the theory of invariants for the Mathematische Annalen, could not appreciate the revolutionary nature of Hilbert's theorem and rejected the article, criticizing the exposition because it was insufficiently comprehensive. His comment was: Das ist nicht Mathematik. Das ist Theologie. (This is not Mathematics. This is Theology.) Klein, on the other hand, recognized the importance of the work and guaranteed that it would be published without any alterations. Encouraged by Klein, Hilbert extended his method in a second article, providing estimations on the maximum degree of the minimum set of generators, and he sent it once more to the Annalen. After having read the manuscript, Klein wrote to him, saying: Without doubt this is the most important work on general algebra that the Annalen has ever published. Later, after the usefulness of Hilbert's method was universally recognized, Gordan himself would say: I have convinced myself that even theology has its merits. For all his successes, the nature of his proof created more trouble than Hilbert could have imagined.
Although Kronecker had conceded, Hilbert would later respond to others' similar criticisms that "many different constructions are subsumed under one fundamental idea"—in other words (to quote Reid): "Through a proof of existence, Hilbert had been able to obtain a construction"; "the proof" (i.e. the symbols on the page) was "the object". Not all were convinced. While Kronecker would die soon afterwards, his constructivist philosophy would continue with the young Brouwer and his developing intuitionist "school", much to Hilbert's torment in his later years. Indeed, Hilbert would lose his "gifted pupil" Weyl to intuitionism—"Hilbert was disturbed by his former student's fascination with the ideas of Brouwer, which aroused in Hilbert the memory of Kronecker". Brouwer the intuitionist in particular opposed the use of the Law of Excluded Middle over infinite sets (as Hilbert had used it). Hilbert responded: Taking the Principle of the Excluded Middle from the mathematician ... is the same as ... prohibiting the boxer the use of his fists. In the subject of algebra, a field is called algebraically closed if and only if every polynomial over it has a root in it. Under this condition, Hilbert gave a criterion for when a collection of polynomials (p_λ)_{λ∈Λ} in n variables has a common root: this is the case if and only if there do not exist polynomials q_1, …, q_k and indices λ_1, …, λ_k such that 1 = q_1 p_{λ_1} + ⋯ + q_k p_{λ_k}. This result is known as the Hilbert root theorem, or "Hilberts Nullstellensatz" in German. He also proved that the correspondence between vanishing ideals and their vanishing sets is bijective between affine varieties and radical ideals in ℂ[x_1, …, x_n]. In 1890, Giuseppe Peano had published an article in the Mathematische Annalen describing the historically first space-filling curve. In response, Hilbert designed his own construction of such a curve, which is now called the Hilbert curve. Approximations to this curve are constructed iteratively according to recursive replacement rules (a code sketch of this construction appears at the end of this article). The curve itself is then the pointwise limit. The text Grundlagen der Geometrie (tr.: Foundations of Geometry), published by Hilbert in 1899, proposes a formal set of axioms, called Hilbert's axioms, substituting for the traditional axioms of Euclid. They avoid weaknesses identified in those of Euclid, whose works at the time were still used textbook-fashion. It is difficult to specify the axioms used by Hilbert without referring to the publication history of the Grundlagen, since Hilbert changed and modified them several times. The original monograph was quickly followed by a French translation, in which Hilbert added V.2, the Completeness Axiom. An English translation, authorized by Hilbert, was made by E.J. Townsend and copyrighted in 1902. This translation incorporated the changes made in the French translation and so is considered to be a translation of the 2nd edition. Hilbert continued to make changes in the text and several editions appeared in German. The 7th edition was the last to appear in Hilbert's lifetime. New editions followed the 7th, but the main text was essentially not revised.[g] Hilbert's approach signaled the shift to the modern axiomatic method. In this, Hilbert was anticipated by Moritz Pasch's work from 1882. Axioms are not taken as self-evident truths.
Geometry may treat things about which we have powerful intuitions, but it is not necessary to assign any explicit meaning to the undefined concepts. The elements, such as point, line, plane, and others, could be substituted, as Hilbert is reported to have said to Schoenflies and Kötter, by tables, chairs, glasses of beer and other such objects. It is their defined relationships that are discussed. Hilbert first enumerates the undefined concepts: point, line, plane, lying on (a relation between points and lines, points and planes, and lines and planes), betweenness, congruence of pairs of points (line segments), and congruence of angles. The axioms unify both the plane geometry and solid geometry of Euclid in a single system. Hilbert put forth a highly influential list of 23 unsolved problems at the International Congress of Mathematicians in Paris in 1900. This is generally reckoned as the most successful and deeply considered compilation of open problems ever to be produced by an individual mathematician. After reworking the foundations of classical geometry, Hilbert could have extrapolated to the rest of mathematics. His approach differed from the later "foundationalist" Russell–Whitehead or "encyclopedist" Nicolas Bourbaki, and from his contemporary Giuseppe Peano. The mathematical community as a whole could engage in problems which he had identified as crucial aspects of important areas of mathematics. The problem set was launched as a talk, "The Problems of Mathematics", presented during the course of the Second International Congress of Mathematicians, held in Paris. The introduction of the speech that Hilbert gave said: Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries? What particular goals will there be toward which the leading mathematical spirits of coming generations will strive? What new methods and new facts in the wide and rich field of mathematical thought will the new centuries disclose? He presented fewer than half the problems at the Congress; these were published in the acts of the Congress. In a subsequent publication, he extended the panorama, and arrived at the formulation of the now-canonical 23 Problems of Hilbert (see also Hilbert's twenty-fourth problem). The full text is important, since the exegesis of the questions still can be a matter of debate when it is asked how many have been solved. Some of these were solved within a short time. Others have been discussed throughout the 20th century, with a few now taken to be unsuitably open-ended to come to closure. Some continue to remain challenges. The headers for Hilbert's 23 problems appeared in English in the 1902 translation in the Bulletin of the American Mathematical Society. In an account that had become standard by the mid-century, Hilbert's problem set was also a kind of manifesto that opened the way for the development of the formalist school, one of three major schools of mathematics of the 20th century. According to the formalist, mathematics is manipulation of symbols according to agreed upon formal rules. It is therefore an autonomous activity of thought. In 1920, Hilbert proposed a research project in metamathematics that became known as Hilbert's program. He wanted mathematics to be formulated on a solid and complete logical foundation.
He believed that in principle this could be done by showing that: (1) all of mathematics follows from a correctly chosen finite system of axioms; and (2) some such axiom system is provably consistent. He seems to have had both technical and philosophical reasons for formulating this proposal. It affirmed his dislike of what had become known as the ignorabimus, still an active issue in his time in German thought, and traced back in that formulation to Emil du Bois-Reymond. This program is still recognizable in the most popular philosophy of mathematics, where it is usually called formalism. For example, the Bourbaki group adopted a watered-down and selective version of it as adequate to the requirements of their twin projects of (a) writing encyclopedic foundational works, and (b) supporting the axiomatic method as a research tool. This approach has been successful and influential in relation with Hilbert's work in algebra and functional analysis, but has failed to engage in the same way with his interests in physics and logic. Hilbert wrote in 1919: We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise. Hilbert published his views on the foundations of mathematics in the 2-volume work Grundlagen der Mathematik. Hilbert and the mathematicians who worked with him in his enterprise were committed to the project. His attempt to support axiomatized mathematics with definitive principles, which could banish theoretical uncertainties, ended in failure. Gödel demonstrated that any consistent formal system that is sufficiently powerful to express basic arithmetic cannot prove its own consistency using only its own axioms and rules of inference. In 1931, his incompleteness theorems showed that Hilbert's grand plan was impossible as stated: the second point cannot in any reasonable way be combined with the first, as long as the axiom system is genuinely finitary. Nevertheless, the subsequent achievements of proof theory at the very least clarified consistency as it relates to theories of central concern to mathematicians. Hilbert's work had started logic on this course of clarification; the need to understand Gödel's work then led to the development of recursion theory and then mathematical logic as an autonomous discipline in the 1930s. The basis for later theoretical computer science, in the work of Alonzo Church and Alan Turing, also grew directly out of this "debate". Around 1909, Hilbert dedicated himself to the study of differential and integral equations; his work had direct consequences for important parts of modern functional analysis. In order to carry out these studies, Hilbert introduced the concept of an infinite-dimensional Euclidean space, later called Hilbert space. His work in this part of analysis provided the basis for important contributions to the mathematics of physics in the next two decades, though from an unanticipated direction. Later on, Stefan Banach amplified the concept, defining Banach spaces. Hilbert spaces are an important class of objects in the area of functional analysis, particularly of the spectral theory of self-adjoint linear operators, that grew up around it during the 20th century. Until 1912, Hilbert was almost exclusively a pure mathematician. When planning a visit from Bonn, where he was immersed in studying physics, his fellow mathematician and friend Hermann Minkowski joked that he had to spend 10 days in quarantine before being able to visit Hilbert.
In fact, Minkowski seems responsible for most of Hilbert's physics investigations prior to 1912, including their joint seminar on the subject in 1905. In 1912, three years after his friend's death, Hilbert turned his focus to the subject almost exclusively. He arranged to have a "physics tutor" for himself. He started studying kinetic gas theory and moved on to elementary radiation theory and the molecular theory of matter. Even after the war started in 1914, he continued seminars and classes where the works of Albert Einstein and others were followed closely. By 1907, Einstein had framed the fundamentals of the theory of gravity, but then struggled for nearly 8 years to put the theory into its final form. Meeting Emmy Noether at Göttingen was instrumental in his breakthrough. By early summer 1915, Hilbert's interest in physics had focused on general relativity, and he invited Einstein to Göttingen to deliver a week of lectures on the subject. Einstein received an enthusiastic reception at Göttingen. Over the summer, Einstein learned that Hilbert was also working on the field equations and redoubled his own efforts. During November 1915, Einstein published several papers culminating in The Field Equations of Gravitation (see Einstein field equations).[h] Nearly simultaneously, Hilbert published "The Foundations of Physics", an axiomatic derivation of the field equations (see Einstein–Hilbert action). Hilbert fully credited Einstein as the originator of the theory, and no public priority dispute concerning the field equations ever arose between the two men during their lives.[i] Additionally, Hilbert's work anticipated and assisted several advances in the mathematical formulation of quantum mechanics. His work was a key aspect of Hermann Weyl and John von Neumann's work on the mathematical equivalence of Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave equation, and his namesake Hilbert space plays an important part in quantum theory. In 1926, von Neumann showed that, if quantum states were understood as vectors in Hilbert space, they would correspond with both Schrödinger's wave function theory and Heisenberg's matrices.[j] Throughout this immersion in physics, Hilbert worked on putting rigor into the mathematics of physics. While highly dependent on higher mathematics, physicists tended to be "sloppy" with it. To a pure mathematician like Hilbert, this was both ugly and difficult to understand. As he began to understand physics and how physicists were using mathematics, he developed a coherent mathematical theory for what he found, most importantly in the area of integral equations. When his colleague Richard Courant wrote the now classic Methoden der mathematischen Physik (Methods of Mathematical Physics), including some of Hilbert's ideas, he added Hilbert's name as author even though Hilbert had not directly contributed to the writing. Hilbert said "Physics is too hard for physicists", implying that the necessary mathematics was generally beyond them; the Courant–Hilbert book made it easier for them. Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770, as stated below. As with the finiteness theorem, he used an existence proof that shows there must be solutions for the problem rather than providing a mechanism to produce the answers.
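For reference, the problem of Waring that Hilbert settled in 1909 (the result is now known as the Hilbert–Waring theorem) can be stated in one line; the formulation below is the standard one and is added here for clarity:

\[
\forall k \ge 1 \;\; \exists\, g(k) \;\; \forall n \in \mathbb{N} : \quad n = x_{1}^{k} + x_{2}^{k} + \cdots + x_{g(k)}^{k} \quad \text{for some nonnegative integers } x_{1}, \ldots, x_{g(k)} .
\]

Hilbert's proof establishes that such a g(k) exists for every k without giving a procedure to compute the representations, which is the sense in which it parallels the finiteness theorem.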
Hilbert then had little more to publish on number theory; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area. He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi.[k] Hilbert did not work in the central areas of analytic number theory, but his name has become known for the Hilbert–Pólya conjecture, for reasons that are anecdotal. Ernst Hellinger, a student of Hilbert, once told André Weil that Hilbert had announced in his seminar in the early 1900s that he expected the proof of the Riemann Hypothesis would be a consequence of Fredholm's work on integral equations with a symmetric kernel.

Works

His collected works (Gesammelte Abhandlungen) have been published several times. The original versions of his papers contained "many technical errors of varying degree"; when the collection was first published, the errors were corrected and it was found that this could be done without major changes in the statements of the theorems, with one exception—a claimed proof of the continuum hypothesis. The errors were nonetheless so numerous and significant that it took Olga Taussky-Todd three years to make the corrections.

Footnotes

The Hilberts had by this time [around 1902] left the Reformed Protestant Church in which they had been baptized and married. It was told in Göttingen that when [David Hilbert's son] Franz started school he could not answer the question, "What religion are you?" (1970, p. 91)
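As mentioned in the discussion of the Hilbert curve above, the approximations are produced by iterated replacement rules. Below is a minimal, self-contained Python sketch of that construction, using the standard L-system presentation of the curve (rules A -> +BF-AFA-FB+ and B -> -AF+BFB+FA-); the function name is illustrative, and the rewrite rules stand in for the replacement figure displayed in the original article.

```python
def hilbert_points(order):
    """Vertices of the order-n approximation to the Hilbert curve.

    Standard L-system: A and B are rewritten in parallel on each iteration;
    when the final string is interpreted, F draws a unit step while '+' and
    '-' turn the heading 90 degrees left and right. The Hilbert curve itself
    is the pointwise limit of these approximations.
    """
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}
    s = "A"
    for _ in range(order):                     # apply the replacement rules
        s = "".join(rules.get(c, c) for c in s)
    x, y, dx, dy = 0, 0, 1, 0                  # start at the origin, heading east
    pts = [(x, y)]
    for c in s:
        if c == "F":                           # one unit step forward
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif c == "+":                         # turn left
            dx, dy = -dy, dx
        elif c == "-":                         # turn right
            dx, dy = dy, -dx
    return pts

print(hilbert_points(1))       # [(0, 0), (0, 1), (1, 1), (1, 0)]: a single "U"
print(len(hilbert_points(3)))  # 64: the order-3 curve visits every cell of an 8x8 grid
```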
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/SIGNAL_(programming_language)] | [TOKENS: 736] |
SIGNAL (programming language)

SIGNAL is a programming language based on synchronized dataflow (flows + synchronization): a process is a set of equations on elementary flows describing both data and control (a toy model of this idea is sketched at the end of this section). The SIGNAL formal model provides the capability to describe systems with several clocks (polychronous systems) as relational specifications. Relations are useful as partial specifications and as specifications of non-deterministic devices (for instance a non-deterministic bus) or external processes (for instance an unsafe car driver). Using SIGNAL allows one to specify an application, to design an architecture, and to refine detailed components down to RTOS or hardware description. The SIGNAL model supports a design methodology which goes from specification to implementation, from abstraction to concretization, and from synchrony to asynchrony. SIGNAL has been developed mainly in the INRIA Espresso team since the 1980s, at the same time as the similar programming languages Esterel and Lustre.

A brief history

The SIGNAL language was first designed for signal processing applications at the beginning of the 1980s. It was proposed to answer the demand for a new domain-specific language for the design of signal processing applications, adopting a dataflow and block-diagram style with array and sliding-window operators. P. Le Guernic, A. Benveniste, and T. Gautier were in charge of the language definition. The first paper on SIGNAL was published in 1982, while the first complete description of SIGNAL appeared in the PhD thesis of T. Gautier. The symbolic representation of SIGNAL via ℤ/3ℤ (over {-1, 0, 1}) was introduced in 1986. A full compiler of SIGNAL, based on the clock calculus over a hierarchy of Boolean clocks, was described by L. Besnard in his PhD thesis in 1992. The clock calculus was later improved by T. Amagbegnon with the proposal of arborescent canonical forms. During the 1990s, the application domain of the SIGNAL language was extended to general embedded and real-time systems. The relation-oriented specification style enabled the incremental construction of systems, and also led to designs that consider multi-clocked systems, in contrast to the original single-clock-based implementations of Esterel and Lustre. Moreover, the design and implementation of distributed embedded systems were also taken into account in SIGNAL. The corresponding research includes the optimization methods proposed by B. Chéron, the clustering models defined by B. Le Goff, the abstraction and separate compilation formalized by O. Maffeïs, and the implementation of distributed programs developed by P. Aubry.

The Polychrony toolset

The Polychrony toolset is an open-source development environment for critical/embedded systems based on SIGNAL, a real-time polychronous dataflow language. It provides a unified model-driven environment to perform design exploration by using top-down and bottom-up design methodologies, formally supported by design model transformations from specification to implementation and from synchrony to asynchrony. It can be included in heterogeneous design systems with various input formalisms and output languages. Polychrony is composed of a set of tools.

The SME environment

The SME (SIGNAL Meta under Eclipse) environment is a front-end of Polychrony in the Eclipse environment based on Model-Driven Engineering (MDE) technologies. It consists of a set of Eclipse plug-ins which rely on the Eclipse Modeling Framework (EMF).
The environment is built around SME, a metamodel of the SIGNAL language extended with mode automata concepts. The SME environment is composed of several corresponding Eclipse plug-ins.
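To make the "flows + synchronization" idea above concrete, here is a toy Python model; it is an illustration invented for this text, not SIGNAL itself and not part of the Polychrony toolset. A flow is encoded as a mapping from abstract tick indices to values, so its key set plays the role of its clock, and the three functions mimic, in spirit, SIGNAL's delay ($), default, and when operators.

```python
def delay(x, init):
    """y := x $ 1 init v  --  same clock as x, values shifted by one occurrence."""
    ticks = sorted(x)
    return dict(zip(ticks, [init] + [x[t] for t in ticks[:-1]]))

def default(x, y):
    """z := x default y  --  present whenever x or y is; x's value takes priority."""
    return {**y, **x}

def when(x, b):
    """z := x when b  --  x's values at the ticks where b is present and true."""
    return {t: v for t, v in x.items() if b.get(t) is True}

# Two flows with different clocks: polychrony means there is no single master clock.
x = {0: 10, 2: 20, 5: 30}                      # x is present at ticks 0, 2, 5
y = {1: -1, 2: -2, 3: -3}                      # y is present at ticks 1, 2, 3
print(default(x, y))                           # present on the union of the two clocks
print(delay(x, init=0))                        # {0: 0, 2: 10, 5: 20}
print(when(x, {0: True, 2: False, 5: True}))   # {0: 10, 5: 30}
```

In SIGNAL proper, such equations are written relationally and the compiler's clock calculus checks that the clock constraints of a process are mutually consistent; the dictionary encoding above fixes a global tick order in advance, and so it deliberately sidesteps exactly the synchronization questions that the clock calculus is designed to analyze.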
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sheba] | [TOKENS: 7295] |
Sheba

Sheba,[a] or Saba,[b] was an ancient South Arabian kingdom that existed in Yemen before 275 CE. It likely began to exist between c. 1000 BCE and c. 800 BCE. Its inhabitants were the Sabaeans,[c] who, as a people, were indissociable from the kingdom itself for much of the 1st millennium BCE. Modern historians agree that the heartland of the Sabaean civilization was located in the region around Marib and Sirwah. In some periods, they expanded to much of modern Yemen and even parts of the Horn of Africa, particularly Eritrea and Ethiopia. The kingdom's native language was Sabaic, which was a variety of Old South Arabian. Among South Arabians and Abyssinians, Sheba's name carried prestige, as it was widely considered to be the birthplace of South Arabian civilization as a whole. The first Sabaean kingdom lasted from the 8th century BCE to the 1st century BCE: this kingdom can be divided into the "mukarrib" period, during which it reigned supreme over all of South Arabia, and the "kingly" period, a long period of decline relative to the neighbouring kingdoms of Ma'in, Hadhramaut, and Qataban, ultimately ending when a newer neighbour, Himyar, annexed them. Sheba was originally confined to the region of Marib (its capital city) and its surroundings. At its height, it encompassed much of the southwestern parts of the Arabian Peninsula before eventually declining to the regions of Marib. However, it re-emerged from the 1st to 3rd centuries CE. During this time, a secondary capital was founded at Sanaa, which is also the capital city of modern Yemen. Around 275 CE, the Sabaean civilization came to a permanent end in the aftermath of another Himyarite annexation. The Sabaeans, like the other South Arabian kingdoms of their time, took part in the extremely lucrative spice trade, especially including frankincense and myrrh. They left behind many inscriptions in the monumental Ancient South Arabian script, as well as numerous documents in the related cursive Zabūr script. Their interaction with African societies in the Horn is attested by numerous traces, including inscriptions and temples dating back to the Sabaean presence in Africa. The Hebrew Bible references the kingdom in an account describing the interactions between King Solomon of Israel and a figure identified as the Queen of Sheba. The Hebrew Bible's account is considered legendary. A similar narrative is also found in the Quran (Sheba is distinct from the Sabians). Traditions concerning the legacy of the Queen of Sheba feature extensively in Ethiopian Christianity, particularly Orthodox Tewahedo, and among Yemenis today. She is left unnamed in Jewish tradition, but is known as Makeda in Ethiopian tradition and as Bilqis in Arab and Islamic tradition. According to the Jewish historian Josephus, Sheba was the home of Princess Tharbis, a Cushite who is said to have been the wife of Moses before he married Zipporah. Some Quranic exegetes identified Sheba with the People of Tubba.

Sources

The Sabaic language was written down in the Sabaic script as early as the 11th or 10th centuries BCE. The Sabaic tradition has left behind a sizable epigraphic record. Of the 12,000 corresponding Ancient South Arabian inscriptions, 6,500 are in Sabaic.
The region first sees a continuous record of epigraphic documentation in the 8th century BCE, which lasts until the 9th century CE, long after the fall of the Sabaean kingdom; it covers a time range of about a millennium and a half and constitutes the main source of information about the Sabaeans. South Arabian civilization may be the only civilization that can be reconstructed from epigraphic evidence. External information about the Sabaeans comes first from Akkadian cuneiform texts starting in the 8th century BCE. Less important are brief reports from the Bible about correspondence between Solomon and the Queen of Sheba. The story is considered legendary, as early first-millennium BCE epigraphic sources show no evidence of diplomatic missions or women rulers. Knowledge of the Sabaeans as merchant peoples indicates that some level of trade between the regions was underway in this time. After the campaigns of Alexander the Great, South Arabia became a hub of trade routes linking the broader geopolitical realm with India. As such, information about the region begins to appear among Greco-Roman observers and becomes more concrete. The most important accounts about South Arabia are from Eratosthenes, Strabo, Theophrastus, Pliny the Elder, an anonymous first-century seafaring manual called the Periplus of the Erythraean Sea concerning the politics and topography of South Arabian coasts, the Ecclesiastical History by Philostorgius, and Procopius. Scholars have noted that Sheba and ancient Judah/Palestine maintained trade, linguistic, and cultural contacts during antiquity.

History

The formative phase of the Sabaeans, or the period prior to the emergence of urban cultures in South Arabia, can be placed in the latter part of the 2nd millennium BCE, and was completed by the 10th century BCE, when a fully developed script appears in combination with the technological prowess to construct complex architectural complexes and cities. There is some debate as to the degree to which the movement out of the formative phase was channeled by endogenous processes, or by the transfer of technologies from other centers, perhaps via trade and immigration. Originally, the Sabaeans were part of "communities" (called shaʿbs) on the edge of the Sayhad desert. Very early, at the beginning of the 1st millennium BCE, the political leaders of this tribal community managed to create a huge commonwealth of shaʿbs occupying most of South Arabian territory and took on the title "Mukarrib of the Sabaeans". The origin of the Sabaean Kingdom is uncertain and is a point of disagreement among scholars, with estimates placing it around 1200 BCE, by the 10th century BCE at the latest, or in a period of flourishing that only begins from the 8th century BCE onwards. Once the polity had been established, Sabaean kings referred to themselves by the title Mukarrib. The first major phase of the Sabaean civilization lasted between the 8th and 1st centuries BCE. For centuries, Saba dominated the political landscape in South Arabia. The 8th century is when the first stone inscriptions appear, and when leaders are already being called by the title Mukarrib ("federator"). Due to this convention, this era can also be called the "Mukarrib period". The title mukarrib was more prestigious than that of mlk ("king") and was used to refer to someone who extended hegemony over other tribes and kingdoms. Saba reached the height of its powers between the 8th and 6th centuries BCE.
In particular, through protracted warfare, Karib'il Watar carried out a series of conquests that extended Sabaean territory to Najran in the north, the Gulf of Aden in the southwest, and eastward from that point along the coast to the western foothills of the Hadhramaut plateau. Saba reigned supreme over South Arabia, and Karib'il established diplomatic contacts with the Assyrian emperor Sennacherib. This territorial range by a South Arabian kingdom would not be seen again until Himyar achieved it over 1,100 years later. Karib'il's success is reflected by the dynastic succession of four rulers from his lineage, including sons, a grandson, and great-grandsons, a rare occurrence given how uncommon dynastic succession was in ancient South Arabian culture. The next time this would be seen was six centuries later in Qataban. After the 6th century BCE, Saba was unable to maintain its supremacy over South Arabia in the face of the expanding adjacent powers of Qataban and Hadhramaut militarily, and Ma'in economically, leading it to contract back to its core territory around Marib and Sirwah. Sabaean leaders reverted to the use of the title malik ("king") instead of mukarrib. This decline began soon after the end of the reign of Karib'il Watar. While Karib'il established hegemony over the Jawf, his immediate successors only consolidated their power over some of its former city-states (including Nashq and Manhayat), whereas others (like Yathill and the towns of Wadi Raghwan) were absorbed into Ma'in. Qataban expanded into the Southern Highlands, formerly under Sabaean rule. Economically, the first Sabaean period was dominated by a caravan economy that had market ties with the rest of the Near East. Its first major trading partners were at Khindanu and the Middle Euphrates. Later, this moved to Gaza during the Persian period, and finally to Petra in Hellenistic times. The South Arabian deserts gave rise to important aromatics which were exported in trade, especially frankincense and myrrh. Saba also acted as an intermediary for overland trade with neighbours in Africa and, further off, India. By the end of the 1st millennium BCE, several factors came together and brought about the decline of the Sabaean state and civilization. The biggest challenge came from the expansion of the Roman Republic. The Republic conquered Syria in 63 BCE and Egypt in 30 BCE, diverting Saba's overland trade network. The Romans then attempted to conquer Saba around 26/25 BCE with an army sent out under the command of the governor Aelius Gallus, laying siege to Marib. Due to heat exhaustion, the siege had to be quickly given up. After conquering Egypt, however, the Romans redirected the overland trade network to maritime routes, with Bir Ali (then called Qani) chosen as an intermediary port. This port was part of the Kingdom of Hadhramaut, far from Sabaean territory. Greatly weakened economically, the Kingdom of Saba was soon annexed by the Himyarite Kingdom, bringing this period to a close. After the disintegration of the first Himyarite Kingdom, the Sabaean Kingdom reappeared and began to vigorously campaign against the Himyarites, and it flourished for another century and a half. This resurgent kingdom was different from the earlier one in many important respects. The most significant change from the earlier Sabaean period is that local power dynamics had shifted from the oasis cities on the desert margin, like Marib, to the highland tribes. The Almaqah temple at Marib returned to being a religious center.
Saba inaugurated a new coinage, and the remarkable Ghumdan Palace was built at Sanaa, which in this period had its status elevated to that of a secondary capital next to Marib. Despite Saba liberating itself from Himyar by around 100 CE, leaders of Himyar continued calling themselves the "king of Saba", as they had been doing during the period in which they ruled the region, to assert their legitimacy over the territory. The Kingdom fell after a long but sporadic civil war between several Yemenite dynasties claiming kingship, from which the late Himyarite Kingdom emerged victorious. The Sabaean kingdom was finally and permanently conquered by the Himyarites around 275 CE. Saba lost its royal status and reverted to a normal tribe, limited to the citizens of Marib, who are named for the last time in South Arabian sources in CIH 541, requesting assistance from the king in repairing a rupture in the Marib Dam.

Conquests

The major conquests of Saba were driven by the exploits of Karib'il Watar. Karib'il conquered all surrounding neighbours, including Awsan, Qataban, and Hadhramaut. Karib'il's exploits largely unified Yemen. The conquests of Karib'il are documented in two lengthy inscriptions (RES 3945–3946) discovered at the Temple of Almaqah at Sirwah. These inscriptions describe a series of eight campaigns to show how Karib'il ultimately brought South Arabia under the control of Saba. The first campaign took place in the highlands west of Marib, where Karib'il declares that he had captured 8,000 and killed 3,000 enemies. The second campaign concerned the Kingdom of Awsan, which flourished in the 8th and 7th centuries BCE. Up until the reign of Karib'il, it was a significant regional competitor of the Kingdom of Saba. However, Karib'il's campaign brought about the obliteration of the Kingdom of Awsan. The tribal elite leading Awsan were slaughtered, and the palace of Murattaʿ was destroyed, as well as their temples and inscriptions. The wadi was depopulated and abandoned. Sabaean inscriptions claim that 16,000 were killed and 40,000 prisoners were taken. This may not have been a significant exaggeration, as the Awsan kingdom disappeared as a political entity from the historical record for five or six centuries. The third and fourth campaigns involved attacks against tribes living in low-lying hills that geographically face the Gulf of Aden. The fifth and sixth campaigns were against Nashshan. Nashshan was, like Awsan, one of Saba's most powerful competitors. Nashshan, however, fell to Karib'il; its defeat was combined with the destruction of several towns and buildings and the imposition of a tribute on its people. Any dissidents were killed, and the cult of Almaqah was imposed onto Nashshan, with Nashshan's leaders being required to build a temple for him. The final two campaigns were against the Tihamah coastal region and the Najran region. The role played by Sabaeans in the formation of Dʿmt (Di'amat), located in modern-day Ethiopia's Tigray Region and founded c. 800 BCE, continues to be debated by scholars. Evidence of strong Sabaean influence includes Sabaic inscriptions and Sabaean temples. Scholars of South Arabian archaeology and epigraphy tend to favour a migration and/or colonisation, while scholars of African archaeology tend to stress an indigenous origin. Sabaean populations migrated to maintain the new polity and link it with the mother country, including through managing trade between the two (ivory might have especially been a driver of the expansion).
The capital of the new kingdom was Yeha, where a great temple was built for Almaqah, the national god of Saba. Four other Almaqah temples are also known from Di'amat (including the Temple of Meqaber Gaʿewa), and other inscriptions mention all the other known Sabaean deities. The great Yeha temple was modelled by Sabaean masons on the Almaqah Temple at Sirwah (a major urban center of Saba). Besides religion, Sabaean culture also diffused into Di'amat through the use of objects, architectural techniques, artistic styles, institutions, paleographical styles for writing inscriptions, and the use of abstract symbols. Leaders in Di'amat used the classical South Arabian title, the mukarrib, and one particular title that is seen is the "Mukarrib of Diʿamat and Saba" (mkrb Dʿmt s-S1bʾ). The exact timing of the collapse of Di'amat is not known: it happened around the mid-1st millennium BCE and involved a destruction of Yeha along with a number of adjacent sites. This also happened when Saba was beginning to lose its grip on power over South Arabia. In 2019, Sabaean inscriptions were found in Somaliland and Puntland, as well as a Sabaean temple whose inscriptions say its construction was ordered by the admiral of Sheba's fleet. In 2025, Alfredo González-Ruibal said "we can perhaps discern two different models: a proper colonialist one along the northern Somali seaboard, with direct intervention of the state and aimed at the extraction of resources, and a diasporic model in the northern Horn [where Dʿmt was located], led by élites who soon mixed with local people, while maintaining ties with their ancestral homeland". Warfare continued between Saba, Ethiopia, and Himyar during the second Sabaean period, with a dynamic and shifting array of alliances. Recently discovered evidence shows that these encounters took place not only on the peninsula, but also on Ethiopian territory during expeditions launched by the Sabaeans.

Urban centers

In the Kingdom of Saba, Marib was an oasis and one of the main urban centers of the kingdom. It was by far the largest city of ancient South Arabia, if not its only real city. Marib was located at the precise point where the wadi (Wadi Dhana) emerges from the Yemeni highlands. It was located along what was called the Sayhad desert by medieval Arab geographers, but is now known as the Ramlat al-Sab'atayn. The city lies 135 km east of Sanaa, the capital of Yemen today, in the Wadi Dhana delta, in the northwestern central Yemeni highlands. The oasis is about 10,000 hectares, and the course of the wadi divides it into a northern and a southern half, a division already spoken of in records from the 8th century BCE; this prominent feature may have been remembered as late as the time of the Quran (34:15). A wall was built around Marib, and 4 km of that wall is still standing today. The wall, in some places, can be as much as 14 m thick. The wall encloses a 100-hectare area shaped like a trapezoid, and the settlement appears to have been created in the late second millennium BCE. Archaeological inquiries have uncovered a settlement plan that allocated different areas for different tasks. There is one residential division to the city. Another division, containing sacred buildings but no residential development, was probably a storage area for trade caravans and the shipment of goods. Immediately to the west was the great city temple Harun, dedicated to the national Sabaean god, Almaqah.
A processional road, known from inscriptions but not yet discovered, led from the Harun temple to the Temple of Awwam, 3.5 km to the southeast of Marib, which is both the main temple of the god Almaqah in the Kingdom of Saba and the largest temple complex known from South Arabia. Hundreds of inscriptions are known from the Awwam Temple, and these documents form the basis from which the political history of South Arabia in the first few centuries of the Christian era has thus far been reconstructed. The enclosure was built in the 7th century BCE according to a monumental inscription from the time of Yada'il Darih. South of the temple wall is a 1.5-hectare necropolis, in which it is estimated that about 20,000 people were buried over a time period covering about a millennium. Shortly west of the Awwam Temple is another major temple in the southern oasis dedicated to Almaqah, which has been fully excavated and is the best-studied temple to date from South Arabia: the Barran Temple. It is evident that predecessors to the Barran Temple went back to the 10th century BCE. The construction history is properly documented by inscriptions in the area. The temple was destroyed shortly before the beginning of the Christian era. The exact cause is unknown, but it may have been linked to an (ultimately unsuccessful) siege of South Arabia by the Romans, under the leadership of the governor Aelius Gallus, in 25/24 BCE. Inscriptions attest other temples dedicated to other gods, but these have not yet been discovered archaeologically. The Marib Dam was one of the most well-known architectural complexes of Yemen, and was even mentioned in the Quran (34:16); this construction made it possible to irrigate the 10,000 hectares of the Marib oasis. The dam is located 10 km west of the main settlement. The dam diverts and distributes water from the biannual monsoon rains into two main channels, which move away from the wadi and into fields through a highly dispersive system. This allowed the region to convert alluvial loads into fertile soils and so cultivate various crops. It took until the 6th century BCE for the full closure to be accomplished. The system required constant maintenance, and two major dam failures are reported from 454/455 and 547 CE. However, as political authority weakened over the course of the 6th century CE, maintenance efforts could not be sustained. The dam was therefore breached, and the oasis was temporarily abandoned by the early seventh century. The second Sabaean urban center was Sirwah. The two cities are connected by an ancient road. A wall had been built around Sirwah by the 10th century BCE. Much smaller than Marib, the city of Sirwah is 3.8 hectares in size, but it is archaeologically well understood. The main buildings at the site are administrative and sacred buildings. Some buildings demonstrate that Sirwah acted as a transshipment point for trade goods. Legal documents show that Sirwah engaged in trade with Qataban to the southeast and the highlands around Sanaa to the west. Despite the urban area being limited, a significant portion of the space was allocated to sacred buildings. This has led some scholars to think that Sirwah acted as a religious center. The Great Temple of Almaqah is the most notable one; besides it, four other sacred buildings are known. One of these buildings was probably devoted to the female deity Atarsamain.
Yada'il Darih, already a temple builder at the Awwam Temple in Marib, also fundamentally remodelled the Almaqah Temple in the mid-7th century BCE. Inside the temple, in the area that is most cultically important, stand two parallel monumental inscriptions recording the lifetime achievements of two rulers: Yatha' Amar Watar and Karib'il Watar, who reigned in the late 8th and early 7th centuries BCE. The description in these records begins with comments on sacrifices made to the Sabaean deities, then mostly delves into military campaigns in meticulous detail. At the end, the inscriptions record purchases of cities, landscapes, and fields.

Economy and trade

The Sabaeans had a long history of seafaring and commerce. A Sabaean presence in Africa was noted in antiquity with the founding of the kingdom of Dʿmt in Ethiopia in the 8th century BCE. The anonymous 1st-century CE Periplus of the Erythraean Sea described how the Arabs controlled the coast of Azania (the East African coast). The Quran mentions trade with Sheba: "And We placed between them and the cities which We had blessed [many] visible cities. And We determined between them the [distances of] journey, [saying], "Travel between them by night or day in safety." The Old Testament Book of Ezekiel reads, "Dedan traded in saddle blankets with you. Arabia and all the princes of Kedar were your customers; they did business with you in lambs, rams and goats. 'The merchants of Sheba and Raamah traded with you; for your merchandise they exchanged the finest of all kinds of spices and precious stones, and gold." The Chinese explorer Faxian, who passed through Sri Lanka in 414 CE, reported that Sabaean merchants and Arabs from Oman and Hadhramaut lived in ornate homes in settlements on the island and traded in timber.

Society

Limitations in the available evidence prevent a full reconstruction of the religious world of the Ancient South Arabian kingdoms. While many of the known inscriptions speak about gods, most only hand down the name of the divinity without describing its nature, function, or cult. It is not known, for example, if these kingdoms had a god of war or a god of the underworld. Familial relationships between the gods are frequently mentioned, however. Saba had five gods in its pantheon: Almaqah, Athtar, Haubas, Dhat-Himyam, and Dhat-Badan. The first three are male, and the last two are female. The high god of the pantheon, and the national god of Saba, was Almaqah, whose worship was centered at the Temple of Awwam. Military victory helped spread this cult, such as when a temple to Almaqah was built in Nashshan after it was conquered by Saba. The mention of Almaqah in the Jawf also indicates the political role played by Saba in that valley. The nature of the god is not entirely clear, but Almaqah has been hypothesized to be a moon god by some researchers. Athtar was not limited to Saba, but was instead the common god of the South Arabian pantheon during its polytheistic era. Athtar was also once the great god of the Sabaean pantheon, before being supplanted by Almaqah. Generally, however, South Arabian deities are region-specific and lack parallels elsewhere in the Near East. Anthropomorphic representations of the gods are lacking entirely from the Old Sabaean period, and only begin to appear with the onset of Hellenistic and Roman influences at the turn of the Christian era.
Ancient South Arabian kings built great public works, had special ties with the gods legitimated through rites only they could perform, and led their armies during battle. They are represented as brave warriors, pious worshippers, and active builders. The fathers of kings are rarely attested independently. The function of the king was distinct from the role of the sheikh. The Geographica by Strabo claims that in the region, the succession of kings was not familial, a claim that is partly confirmed by inscriptions. South Arabian kings did not appeal to their genealogy or the accomplishments of their fathers to legitimate their own rule. Only late in Sabaean history, from the second half of the 2nd century CE, did a real dynastic succession from father to son appear, and it only lasted for two generations. The Sabaean king was called the mukarrib ("federator") more often than the malik ("king") between the 8th and 6th centuries BCE, to indicate hegemony over his neighbours. When Saba declined after the 6th century BCE and Sabaean territory contracted to what it was prior to the conquests of Karib'il Watar, the title mukarrib was replaced by that of malik. In the early centuries of Saba, the title of the king was a combination of a name and an epithet. All titles were chosen from a combination of six possible names (Dhamar'ali, Karib'il, Sumhu'alay, Yada'il, Yakrubmalik and Yitha'amar) and four possible epithets (Bayan, Dharih, Watar and Yanu). The repetitiveness of names has caused difficulties for historians trying to determine the relative succession of kings (even when they are attested) and raises questions about what the personal names of each king were. A similar practice took place in the neighbouring Kingdom of Hadhramaut. In the centuries leading up to the Christian era, this changed: kings began identifying themselves with their real names, and reconstructions of Sabaean chronology become simpler. According to one inscription, accession to the Sabaean throne required the consent of "the Sabaeans, the qayls and the army". The legislative body extended beyond the king, including other functionaries. The Sabaean monarchs did not implement taxes but derived their wealth from royal lands, war booty, and rent from clients. Military service could be compelled and financial requests could be made for the purpose of funding construction work. Any tithes on temple lands went to the temples themselves, not the monarch. The king of Saba was not deified. The only known case of deification from ancient South Arabian cultures is from the Kingdom of Awsan during its resurgent phase. In the South Arabian tribal system, a fictitious shared ancestor was created and members of the tribe are referred to as the sons of the national god (in the case of Saba, they are "sons of Almaqah"). Allied states and tribes are called "brothers". Tribes were divided into lineages and sub-lineages, reflected in the names of members. The individual proper name appears along with the patronymic, the lineage name, and the name of the tribe, with the exception of funerary inscriptions, where the individual name is attested alone. In areas closer to the desert, the family name was more privileged and commonly mentioned, with the tribal name becoming less mentioned. Personal identity only went back to the name of the father, unlike in North Arabia in the same time period or the later Islamic period, where a long sequence of ancestors is used to identify a figure.
Identity was also in reference to the kingdom that one belonged to (Sabaeans, Qatabanians), not to a broader geographical construct (like "South Arabian"). Culture Sabaic was the spoken language of the Kingdom of Saba. Geographically, Sabaic was spoken in Saba, just as Qatabanic was spoken in Qataban and Hadramitic was spoken in Hadhramaut. The only exception to this is Minaic, which is attested well beyond the geographical territory of its corresponding kingdom, Ma'in. These four languages share a number of linguistic features and are distinguished by others. The documentation for Sabaic is the best of any Ancient South Arabian language, attested in all phases of the history of Saba. The South Arabian kingdoms had writing schools with a common cultural background, although each school also had distinct practices. Legacy Saba appears in the Hebrew Bible, which is also the earliest external source to mention the Sabaeans. Most famously, Saba is presented, through its female monarch the Queen of Sheba, as engaging in trade with Solomon in aromatics and gold. Historians have questioned the historicity of this story. The Hebrew Bible links the Sabaean caravan trading network with other cities including Dedan, Tayma, and Ra'mah. The story of the visit of the Queen of Sheba to Solomon is discussed in Quran 27:15–44. The name of Saba' is mentioned in the Qur'an in Surah 5:69, Surah 27:15-44 and Surah 34:15-17. Surah 34 is named Sabaʾ. The mention in Surah 27 refers to the area in the context of Solomon and the Queen of Sheba, whereas the mention in Surah 34 refers to the Flood of the Dam, in which the dam was ruined by flooding. There is also an epithet, Qawm Tubbaʿ or "People of Tubbaʿ" (Surah 44:37, Surah 50:12-14), that some exegetes have identified as a reference to the kings of Saba'. Muslim commentators such as al-Tabari, al-Zamakhshari and al-Baydawi supplement the story at various points. The Queen's name is given as Bilqis, probably derived from Greek παλλακίς or the Hebraised pilegesh, "concubine". According to some traditions Solomon then married the Queen, while other traditions assert that he gave her in marriage to a tubba of Hamdan. According to the Islamic tradition as represented by al-Hamdani, the queen of Sheba was the daughter of Ilsharah Yahdib, the Himyarite king of Najran. Although the Quran and its commentators have preserved the earliest literary reflection of the complete Bilqis legend, there is little doubt among scholars that the narrative is derived from a Jewish Midrash. Bible stories of the Queen of Sheba and the ships of Ophir served as a basis for legends about the Israelites traveling in the Queen of Sheba's entourage when she returned to her country to bring up her child by Solomon. There is a Muslim tradition that the first Jews arrived in Yemen at the time of King Solomon, following the politico-economic alliance between him and the Queen of Sheba. The Ottoman scholar Mahmud al-Alusi compared the religious practices of South Arabia to Islam in his Bulugh al-'Arab fi Ahwal al-'Arab. The Arabs of the pre-Islamic period used to practise certain things that were later included in the Islamic Sharia. They, for example, did not marry both a mother and her daughter. They considered marrying two sisters simultaneously to be the most heinous crime. They also censured anyone who married his stepmother, and called him dhaizan. 
They made the major pilgrimage (hajj) and the minor pilgrimage (umra) to the Ka'ba, performed the circumambulation around the Ka'ba (tawaf), ran seven times between Mounts Safa and Marwa (sa'y), threw rocks, and washed themselves after sexual intercourse. They also gargled, sniffed water up into their noses, clipped their fingernails, removed all pubic hair and performed ritual circumcision. Likewise, they cut off the right hand of a thief and stoned adulterers. According to the medieval religious scholar al-Shahrastani, Sabaeans accepted both the sensible and the intelligible world. They did not follow religious laws but centered their worship on spiritual entities. In the Kebra Nagast, a medieval Ethiopian cultural work, Sheba was located in Ethiopia. Some scholars therefore point to a region in northern Tigray and Eritrea which was once called Saba (later called Meroe) as a possible link with the biblical Sheba. Donald N. Levine links Sheba with Shewa (the province where modern Addis Ababa is located) in Ethiopia. Traditional Yemenite genealogies also mention Saba, son of Qahtan; early Islamic historians identified Qahtan with the Yoqtan (Joktan) son of Eber (Hūd) in the Hebrew Bible (Gen. 10:25-29). James A. Montgomery found it difficult to believe that Qahtan was the biblical Joktan on etymological grounds. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ATmega] | [TOKENS: 6211] |
Contents AVR microcontrollers AVR is a family of microcontrollers developed since 1996 by Atmel, acquired by Microchip Technology in 2016. They are 8-bit RISC single-chip microcontrollers based on a modified Harvard architecture. AVR was one of the first microcontroller families to use on-chip flash memory for program storage, as opposed to the one-time programmable ROM, EPROM, or EEPROM used by other microcontrollers at the time. AVR microcontrollers are widely used in embedded systems. They are especially common in hobbyist and educational embedded applications, popularized by their inclusion in many of the Arduino line of open hardware development boards. The AVR 8-bit microcontroller architecture was introduced in 1997. By 2003, Atmel had shipped 500 million AVR flash microcontrollers. History The AVR architecture was conceived by two students at the Norwegian Institute of Technology (NTH), Alf-Egil Bogen and Vegard Wollan. Atmel says that the name AVR is not an acronym and does not stand for anything in particular. The creators of the AVR give no definitive answer as to what the term "AVR" stands for. However, it is commonly accepted that AVR stands for Alf and Vegard's RISC processor. Note that the use of "AVR" in this article generally refers to the 8-bit RISC line of Atmel AVR microcontrollers. The original AVR MCU was developed at a local ASIC design company in Trondheim, Norway, called Nordic VLSI at the time, now Nordic Semiconductor, where Bogen and Wollan were working as students. It was known as a μRISC (Micro RISC) and was available as a silicon IP/building block from Nordic VLSI. When the technology was sold to Atmel by Nordic VLSI, the internal architecture was further developed by Bogen and Wollan at Atmel Norway, a subsidiary of Atmel. The designers worked closely with compiler writers at IAR Systems to ensure that the AVR instruction set provided efficient compilation of high-level languages. Among the first of the AVR line was the AT90S8515, which in a 40-pin DIP package has the same pinout as an 8051 microcontroller, including the external multiplexed address and data bus. The polarity of the RESET line was opposite (the 8051 having an active-high RESET, the AVR an active-low RESET), but other than that the pinout was identical. The Arduino platform, developed for simple electronics projects, was released in 2005 and featured ATmega8 AVR microcontrollers. Device overview The AVR is a modified Harvard architecture machine, where program and data are stored in separate physical memory systems that appear in different address spaces, but with the ability to read data items from program memory using special instructions (illustrated in the sketch below). AVRs are generally classified into the following: tinyAVR – the ATtiny series The ATtiny series features small-package microcontrollers with a limited peripheral set. However, the improved tinyAVR 0/1/2-series (released in 2016) include an expanded peripheral set. megaAVR – the ATmega series The ATmega series features microcontrollers that provide an extended instruction set (multiply instructions and instructions for handling larger program memories), an extensive peripheral set, a solid amount of program memory, as well as a wide range of pins. The megaAVR 0-series (released in 2016) adds further functionality. AVR Dx – the AVR Dx family features multiple microcontroller series, focused on HCI, analog signal conditioning and functional safety. 
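To make the modified-Harvard point concrete: with avr-gcc, reading a constant table out of program memory goes through the LPM-backed helpers in avr-libc. A minimal sketch, not from the original article; the table name and contents are arbitrary placeholders:

```c
#include <avr/pgmspace.h>
#include <stdint.h>

/* Data placed in flash (program memory) rather than copied to SRAM. */
static const uint8_t sine_table[4] PROGMEM = { 0, 49, 90, 117 };

uint8_t read_table(uint8_t i)
{
    /* pgm_read_byte() compiles down to the special LPM instruction
     * that reaches across the Harvard split into program memory. */
    return pgm_read_byte(&sine_table[i & 3]);
}
```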
The part numbers are formatted as AVRffDxpp, where ff is the flash size, x is the family, and pp is the number of pins. Example: AVR128DA64 – a 64-pin DA-series part with 128 KB of flash. All devices in the AVR Dx family share a common set of core features. XMEGA – the ATxmega series offers a wide variety of peripherals and functionality. Application-specific AVR; FPSLIC (AVR with FPGA); 32-bit AVRs. The AVRs have 32 single-byte registers and are classified as 8-bit RISC devices. Flash, EEPROM, and SRAM are all integrated onto a single chip, removing the need for external memory in most applications. Some devices have a parallel external bus option to allow adding additional data memory or memory-mapped devices. Almost all devices (except the smallest TinyAVR chips) have serial interfaces, which can be used to connect larger serial EEPROMs or flash chips. Program instructions are stored in non-volatile flash memory. Although the MCUs are 8-bit, each instruction takes one or two 16-bit words. The size of the program memory is usually indicated in the naming of the device itself (e.g., the ATmega64x line has 64 KB of flash, while the ATmega32x line has 32 KB). There is no provision for off-chip program memory; all code executed by the AVR core must reside in the on-chip flash. However, this limitation does not apply to the AT94 FPSLIC AVR/FPGA chips. The data address space consists of the register file, I/O registers, and SRAM. Some small models also map the program ROM into the data address space, but larger models do not. In the tinyAVR and megaAVR variants of the AVR architecture, the working registers are mapped in as the first 32 data memory addresses (0x0000–0x001F), followed by 64 I/O registers (0x0020–0x005F). In devices with many peripherals, these registers are followed by 160 "extended I/O" registers, only accessible as memory-mapped I/O (0x0060–0x00FF). Actual SRAM starts after these register sections, at address 0x0060 or, in devices with "extended I/O", at 0x0100. Even though there are separate addressing schemes and optimized opcodes for accessing the register file and the first 64 I/O registers, all can also be addressed and manipulated as if they were in SRAM (see the sketch following this passage). The very smallest of the tinyAVR variants use a reduced architecture with only 16 registers (r0 through r15 are omitted) which are not addressable as memory locations. I/O memory begins at address 0x0000, followed by SRAM. In addition, these devices have slight deviations from the standard AVR instruction set. Most notably, the direct load/store instructions (LDS/STS) have been reduced from 2 words (32 bits) to 1 word (16 bits), limiting the total directly addressable memory (the sum of both I/O and SRAM) to 128 bytes. Conversely, the indirect load instruction's (LD) 16-bit address space is expanded to also include non-volatile memory such as flash and configuration bits; therefore, the Load Program Memory (LPM) instruction is unnecessary and omitted. (For detailed info, see Atmel AVR instruction set.) In the XMEGA variant, the working register file is not mapped into the data address space; as such, it is not possible to treat any of the XMEGA's working registers as though they were SRAM. Instead, the I/O registers are mapped into the data address space starting at the very beginning of the address space. Additionally, the amount of data address space dedicated to I/O registers has grown substantially, to 4096 bytes (0x0000–0x0FFF). 
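The dual addressing just described can be seen directly from C. The sketch below assumes a classic megaAVR such as the ATmega328P, where PORTB has I/O address 0x05 and therefore data-space address 0x20 + 0x05 = 0x25; the exact addresses differ on other parts, so this is an illustration rather than a general recipe:

```c
#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    DDRB = 0xFF;                 /* all of port B as outputs            */

    PORTB = 0xAA;                /* write via the named I/O register
                                    (compiles to an OUT instruction)    */

    /* The same register seen through its SRAM mapping: 0x20 offset
     * plus I/O address 0x05 on the ATmega328P.                         */
    volatile uint8_t *portb_mem = (volatile uint8_t *)0x25;
    *portb_mem = 0x55;           /* equivalent write via a store (STS)  */

    for (;;) { }                 /* embedded programs never return      */
}
```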
As with previous generations, however, the fast I/O manipulation instructions can only reach the first 64 I/O register locations (the first 32 locations for bitwise instructions). Following the I/O registers, the XMEGA series sets aside a 4096-byte range of the data address space, which can optionally be used for mapping the internal EEPROM to the data address space (0x1000–0x1FFF). The actual SRAM is located after these ranges, starting at 0x2000. Each GPIO port on a tiny or mega AVR drives up to eight pins and is controlled by three 8-bit registers: DDRx, PORTx and PINx, where x is the port identifier. Newer ATtiny AVRs, like the ATtiny817 and its siblings, define their port control registers somewhat differently. XMEGA AVRs have additional registers for push/pull, totem-pole and pull-up configurations. Almost all AVR microcontrollers have internal EEPROM for semi-permanent data storage. Like flash memory, EEPROM can maintain its contents when electrical power is removed. In most variants of the AVR architecture, this internal EEPROM memory is not mapped into the MCU's addressable memory space. It can only be accessed the same way as an external peripheral device, using special pointer registers and read/write instructions, which makes EEPROM access much slower than other internal RAM. However, some devices in the SecureAVR (AT90SC) family use a special EEPROM mapping to the data or program memory, depending on the configuration. The XMEGA family also allows the EEPROM to be mapped into the data address space. Since the number of writes to EEPROM is limited – Atmel specifies 100,000 write cycles in their datasheets – a well-designed EEPROM write routine should compare the contents of an EEPROM address with the desired contents and only perform an actual write if the contents need to be changed (see the sketch below). Atmel's AVRs have a two-stage, single-level pipeline design, meaning that the next machine instruction is fetched as the current one is executing. Most instructions take just one or two clock cycles, making AVRs relatively fast among eight-bit microcontrollers. The AVR processors were designed with the efficient execution of compiled C code in mind and have several built-in pointers for the task. The AVR instruction set is more orthogonal than those of most eight-bit microcontrollers, in particular the 8051 clones and PIC microcontrollers with which AVR has competed. However, it is not completely regular: some chip-specific differences affect code generation. Code pointers (including return addresses on the stack) are two bytes long on chips with up to 128 KB of flash memory, but three bytes long on larger chips; not all chips have hardware multipliers; chips with over 8 KB of flash have branch and call instructions with longer ranges; and so forth. The mostly regular instruction set makes C (and even Ada) compilers fairly straightforward and efficient. GCC has included AVR support for quite some time, and that support is widely used. LLVM also has rudimentary AVR support. In fact, Atmel solicited input from major developers of compilers for small microcontrollers, to determine the instruction set features that were most useful in a compiler for high-level languages. The AVR line can normally support clock speeds from 0 to 20 MHz, with some devices reaching 32 MHz. Lower-powered operation usually requires a reduced clock speed. All recent (Tiny, Mega, and Xmega, but not 90S) AVRs feature an on-chip oscillator, removing the need for external clocks or resonator circuitry. 
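Here is a short avr-libc sketch tying together the GPIO registers and the compare-before-write EEPROM routine just described. The pin choices and the EEPROM address are illustrative assumptions, not part of the original article; avr-libc also ships eeprom_update_byte(), which performs the same check internally:

```c
#include <avr/io.h>
#include <avr/eeprom.h>

/* Arbitrary EEPROM location, used for illustration only. */
#define CONFIG_ADDR ((uint8_t *)0x10)

/* Write only when the stored value differs, sparing the limited
 * (~100,000) EEPROM write cycles mentioned above. */
static void eeprom_write_if_changed(uint8_t *addr, uint8_t value)
{
    if (eeprom_read_byte(addr) != value)
        eeprom_write_byte(addr, value);
}

int main(void)
{
    DDRB  |= _BV(DDB0);          /* DDRx: make PB0 an output           */
    PORTB |= _BV(PORTB0);        /* PORTx: drive PB0 high              */

    if (PINB & _BV(PINB1))       /* PINx: read the port's input pins   */
        eeprom_write_if_changed(CONFIG_ADDR, 0x01);

    for (;;) { }
}
```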
Some AVRs also have a system clock prescaler that can divide down the system clock by up to 1024. This prescaler can be reconfigured by software during run-time, allowing the clock speed to be optimized (a sketch of the required register sequence appears at the end of this passage). Since all operations (excluding multiplication and 16-bit add/subtract) on registers R0–R31 are single-cycle, the AVR can achieve up to 1 MIPS per MHz, i.e. an 8 MHz processor can achieve up to 8 MIPS. Loads and stores to/from memory take two cycles, and branching takes two cycles. Branches in the latest "3-byte PC" parts such as the ATmega2560 are one cycle slower than on previous devices. AVRs have a large following due to the free and inexpensive development tools available, including reasonably priced development boards and free development software. The AVRs are sold under various names that share the same basic core but with different peripheral and memory combinations. Compatibility between chips in each family is fairly good, although I/O controller features may vary. The Atmel AVR GNU C/C++ cross compiler, "avr-gcc" and "avr-g++", is used in both WinAVR and Atmel Studio. The Arduino team borrowed from WinAVR for the Windows version of the Arduino software. See external links for sites relating to AVR development. AVRs offer a wide range of features. Programming interfaces There are many means to load program code into an AVR chip. The methods used to program AVR chips vary from AVR family to family. Most of the methods described below use the RESET line to enter programming mode. To avoid the chip accidentally entering such a mode, it is advised to connect a pull-up resistor between the RESET pin and the positive power supply. The in-system programming (ISP) method is functionally performed through SPI, plus some twiddling of the Reset line. As long as the SPI pins of the AVR are not connected to anything disruptive, the AVR chip can stay soldered on a PCB while being reprogrammed. All that is needed is a 6-pin connector and a programming adapter. This is the most common way to develop with an AVR. The Atmel-ICE device or AVRISP mkII (a legacy device) connects to a computer's USB port and performs in-system programming using Atmel's software. AVRDUDE (AVR Downloader/UploaDEr) runs on Linux, FreeBSD, Windows, and Mac OS X, and supports a variety of in-system programming hardware, including the Atmel AVRISP mkII, Atmel JTAG ICE, older Atmel serial-port based programmers, and various third-party and "do-it-yourself" programmers. The Program and Debug Interface (PDI) is an Atmel proprietary interface for external programming and on-chip debugging of XMEGA devices. The PDI supports high-speed programming of all non-volatile memory (NVM) spaces: flash, EEPROM, fuses, lock-bits and the User Signature Row. This is done by accessing the XMEGA NVM controller through the PDI interface and executing NVM controller commands. The PDI is a 2-pin interface using the Reset pin for clock input (PDI_CLK) and a dedicated data pin (PDI_DATA) for input and output. The Unified Program and Debug Interface (UPDI) is a one-wire interface for external programming and on-chip debugging of newer ATtiny and ATmega devices. UPDI chips can be programmed by an Atmel-ICE, a PICkit 4, an Arduino (flashed with jtag2updi), or through a UART (with a 1 kΩ resistor between the TX and RX pins) controlled by Microchip's Python utility pymcuprog. High-voltage serial programming (HVSP) is mostly the backup mode on smaller AVRs. 
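Returning to the run-time clock prescaler mentioned at the start of this passage: on parts with a CLKPR register (many megaAVR devices), the datasheet requires a timed two-step write. The following is a minimal sketch under that assumption, not a universal recipe; avr-libc wraps the same sequence as clock_prescale_set() in <avr/power.h>:

```c
#include <avr/io.h>
#include <avr/interrupt.h>

/* Timed sequence: set CLKPCE, then write the new CLKPS bits within
 * four clock cycles, with interrupts held off. */
static void set_clock_prescaler(uint8_t clkps_bits)
{
    uint8_t sreg = SREG;         /* remember interrupt state */
    cli();
    CLKPR = _BV(CLKPCE);         /* unlock the prescaler     */
    CLKPR = clkps_bits;          /* e.g. 0x03 = divide by 8  */
    SREG = sreg;                 /* restore interrupt state  */
}

int main(void)
{
    set_clock_prescaler(0x03);   /* 8 MHz RC clock -> 1 MHz core clock */
    for (;;) { }
}
```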
An 8-pin AVR package does not leave many unique signal combinations to place the AVR into a programming mode. A 12-volt signal, however, is something the AVR should only see during programming and never during normal operation. The high-voltage mode can also be used in some devices where the reset pin has been disabled by fuses. High-voltage parallel programming (HVPP) is considered the "final resort" and may be the only way to correct bad fuse settings on an AVR chip. Most AVR models can reserve a bootloader region, 256 bytes to 4 KB, where re-programming code can reside. At reset, the bootloader runs first and performs a user-programmed check to determine whether to re-program or to jump to the main application (a minimal sketch of this decision appears at the end of this passage). The code can re-program through any interface available, or it could read an encrypted binary through an Ethernet adapter like PXE. Atmel has application notes and code pertaining to many bus interfaces. The AT90SC series of AVRs are available with a factory mask-ROM for program memory instead of flash. Because of the large up-front cost and minimum order quantity, a mask-ROM is only cost-effective for high-production runs. aWire is a one-wire debug interface available on the newer UC3L AVR32 devices. Debugging interfaces The AVR offers several options for debugging, mostly involving on-chip debugging while the chip is in the target system. debugWIRE is Atmel's solution for providing on-chip debug capabilities via a single microcontroller pin. It is useful for lower pin-count parts which cannot provide the four "spare" pins needed for JTAG. The JTAGICE mkII, mkIII and the AVR Dragon support debugWIRE. debugWIRE was developed after the original JTAGICE release, and clones now support it. The Joint Test Action Group (JTAG) feature provides access to on-chip debugging functionality while the chip is running in the target system. JTAG allows accessing internal memory and registers, setting breakpoints on code, and single-stepping execution to observe system behaviour. Atmel provides a series of JTAG adapters for the AVR. JTAG can also be used to perform a boundary scan test, which tests the electrical connections between AVRs and other boundary-scan-capable chips in a system. Boundary scan is well-suited to a production line, while the hobbyist is probably better off testing with a multimeter or oscilloscope. Development tools and evaluation kits Official Atmel AVR development tools and evaluation kits contain a number of starter kits and debugging tools with support for most AVR devices: The STK600 starter kit and development system is an update to the STK500. The STK600 uses a base board, a signal routing board, and a target board. The base board is similar to the STK500, in that it provides a power supply, clock, in-system programming, an RS-232 port and a CAN (Controller Area Network, an automotive standard) port via DE9 connectors, and stake pins for all of the GPIO signals from the target device. The target boards have ZIF sockets for DIP, SOIC, QFN, or QFP packages, depending on the board. The signal routing board sits between the base board and the target board, and routes the signals to the proper pin on the device board. There are many different signal routing boards that could be used with a single target board, depending on what device is in the ZIF socket. The STK600 allows in-system programming from the PC via USB, leaving the RS-232 port available for the target microcontroller. 
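Returning to the bootloader flow described above, the reset-time decision can be as simple as sampling a pin. The following sketch is illustrative only — the choice of PD2 and the jump to flash address 0x0000 are assumptions (and the JMP instruction itself requires a part with more than 8 KB of flash), not a recipe from the article:

```c
#include <avr/io.h>

int main(void)
{
    DDRD  &= ~_BV(DDD2);         /* PD2 as input                        */
    PORTD |=  _BV(PORTD2);       /* enable its internal pull-up         */

    if (PIND & _BV(PIND2)) {     /* button released: start application  */
        asm volatile("jmp 0");   /* application vectors begin at 0x0000 */
    }

    for (;;) {
        /* stay in the bootloader: receive and flash new firmware here */
    }
}
```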
On the STK600, a 4-pin header labeled 'RS-232 spare' can connect any TTL-level USART port on the chip to an onboard MAX232 chip to translate the signals to RS-232 levels. The RS-232 signals are connected to the RX, TX, CTS, and RTS pins on the DB-9 connector. The STK500 starter kit and development system features ISP and high-voltage programming (HVP) for all AVR devices, either directly or through extension boards. The board is fitted with DIP sockets for all AVRs available in DIP packages. STK500 expansion modules: several expansion modules are available for the STK500 board. The STK200 starter kit and development system has a DIP socket that can host an AVR chip in a 40-, 20-, or 8-pin package. The board has a 4 MHz clock source, eight light-emitting diodes (LEDs), eight input buttons, an RS-232 port, a socket for a 32 KB SRAM and numerous general I/O. The chip can be programmed with a dongle connected to the parallel port. The Atmel ICE is the currently supported inexpensive tool to program and debug all AVR devices (unlike the AVRISP/AVRISP mkII, Dragon, etc. discussed below). It connects to and receives power from a PC via USB, and supports JTAG, PDI, aWire, debugWIRE, SPI, SWD, TPI, and UPDI (the Microchip Unified Program and Debug Interface) interfaces. The ICE can program and debug all AVRs via the JTAG interface, and can program with additional interfaces as supported on each device. Target operating voltages of 1.62 V to 5.5 V are supported, as well as a range of clock speeds. The ICE is supported by the Microchip Studio IDE, as well as a command-line interface (atprogram). The Atmel-ICE supports a limited implementation of the Data Gateway Interface (DGI) when debugging and programming features are not in use. The Data Gateway Interface is an interface for streaming data from a target device to the connected computer. This is meant as a useful adjunct to the unit to allow for demonstration of application features and as an aid in application-level debugging. The AVRISP and AVRISP mkII are inexpensive tools allowing all AVRs to be programmed via ICSP. The AVRISP connects to a PC via a serial port and draws power from the target system. The AVRISP allows using either of the "standard" ICSP pinouts, either the 10-pin or 6-pin connector. The AVRISP mkII connects to a PC via USB and draws power from USB. LEDs visible through the translucent case indicate the state of target power. As the AVRISP mkII lacks driver/buffer ICs, it can have trouble programming target boards with multiple loads on its SPI lines. In such cases, a programmer capable of sourcing greater current is required. Alternatively, the AVRISP mkII can still be used if low-value (~150 ohm) load-limiting resistors can be placed on the SPI lines before each peripheral device. Both the AVRISP and the AVRISP mkII are now discontinued, with product pages removed from the Microchip website. As of July 2019, the AVRISP mkII was still in stock at a number of distributors. There are also a number of third-party clones available. The Atmel Dragon is an inexpensive tool which connects to a PC via USB. The Dragon can program all AVRs via JTAG, HVP, PDI, or ICSP. The Dragon also allows debugging of all AVRs via JTAG, PDI, or debugWIRE; a previous limitation to devices with 32 KB or less program memory has been removed in AVR Studio 4.18. The Dragon has a small prototype area which can accommodate an 8-, 28-, or 40-pin AVR, including connections to power and programming pins. 
There is no area for any additional circuitry, although this can be provided by a third-party product called the "Dragon Rider". The JTAG In-Circuit Emulator (JTAGICE) debugging tool supports on-chip debugging (OCD) of AVRs with a JTAG interface. The original JTAGICE (sometimes retroactively referred to as JTAGICE mkI) uses an RS-232 interface to a PC and can only program AVRs with a JTAG interface. The JTAGICE mkI is no longer in production; it has been replaced by the JTAGICE mkII. The JTAGICE mkII debugging tool supports on-chip debugging (OCD) of AVRs with SPI, JTAG, PDI, and debugWIRE interfaces. The debugWIRE interface enables debugging using only one pin (the Reset pin), allowing debugging of applications running on low pin-count microcontrollers. The JTAGICE mkII connects using USB, but there is an alternate connection via a serial port, which requires using a separate power supply. In addition to JTAG, the mkII supports ISP programming (using 6-pin or 10-pin adapters). Both the USB and serial links use a variant of the STK500 protocol. The JTAGICE3 updates the mkII with more advanced debugging capabilities and faster programming. It connects via USB and supports the JTAG, aWire, SPI, and PDI interfaces. The kit includes several adapters for use with most interface pinouts. The AVR ONE! is a professional development tool for all Atmel 8-bit and 32-bit AVR devices with on-chip debug capability. It supports SPI, JTAG, PDI, and aWire programming modes and debugging using debugWIRE, JTAG, PDI, and aWire interfaces. The very popular AVR Butterfly demonstration board is a self-contained, battery-powered computer running the Atmel AVR ATmega169V microcontroller. It was built to show off the AVR family, especially its then-new built-in LCD interface. The board includes the LCD screen, joystick, speaker, serial port, real-time clock (RTC), flash memory chip, and both temperature and voltage sensors. Earlier versions of the AVR Butterfly also contained a CdS photoresistor; it is not present on Butterfly boards produced after June 2006, to allow RoHS compliance. The small board has a shirt pin on its back so it can be worn as a name badge. The AVR Butterfly comes preloaded with software to demonstrate the capabilities of the microcontroller. Factory firmware can scroll your name, display the sensor readings, and show the time. The AVR Butterfly also has a piezoelectric transducer that can be used to reproduce sounds and music. The AVR Butterfly demonstrates LCD driving by running a 14-segment, six-character alphanumeric display. However, the LCD interface consumes many of the I/O pins. The Butterfly's ATmega169 CPU is capable of speeds up to 8 MHz, but it is factory-set by software to 2 MHz to preserve the button-cell battery life. A pre-installed bootloader program allows the board to be re-programmed via a standard RS-232 serial plug with new programs that users can write with the free Atmel IDE tools. This small board (the AT90USBKey), about half the size of a business card, is priced at slightly more than an AVR Butterfly. It includes an AT90USB1287 with USB On-The-Go (OTG) support, 16 MB of DataFlash, LEDs, a small joystick, and a temperature sensor. The board includes software which lets it act as a USB mass storage device (its documentation is shipped on the DataFlash), a USB joystick, and more. To support the USB host capability, it must be operated from a battery, but when running as a USB peripheral, it only needs the power provided over USB. Only the JTAG port uses a conventional 2.54 mm pinout. 
All the other AVR I/O ports require more compact 1.27 mm headers. The AVR Dragon can both program and debug since the 32 KB limitation was removed in AVR Studio 4.18, and the JTAGICE mkII is capable of both programming and debugging the processor. The processor can also be programmed through USB from a Windows or Linux host, using the USB "Device Firmware Update" protocols. Atmel ships proprietary (source code included but distribution restricted) example programs and a USB protocol stack with the device. LUFA is a third-party free software (MIT license) USB protocol stack for the USBKey and other 8-bit USB AVRs. The RAVEN kit supports wireless development using Atmel's IEEE 802.15.4 chipsets, for Zigbee and other wireless stacks. It resembles a pair of wireless, more powerful Butterfly cards plus a wireless USBKey, and costs about as much (under US$100). All these boards support JTAG-based development. The kit includes two AVR Raven boards, each with a 2.4 GHz transceiver supporting IEEE 802.15.4 (and a freely licensed Zigbee stack). The radios are driven with ATmega1284p processors, which are supported by a custom segmented LCD driven by an ATmega3290p processor. Raven peripherals resemble the Butterfly: piezo speaker, DataFlash (bigger), external EEPROM, sensors, 32 kHz crystal for RTC, and so on. These are intended for use in developing remote sensor nodes, to control relays, or whatever is needed. The USB stick uses an AT90USB1287 for connections to a USB host and to the 2.4 GHz wireless links. These are intended to monitor and control the remote nodes, relying on host power rather than local batteries. A wide variety of third-party programming and debugging tools are available for the AVR. These devices use various interfaces, including RS-232, PC parallel port, and USB. Uses AVRs have been used in various automotive applications such as security, safety, powertrain and entertainment systems. Atmel has recently launched a new publication, "Atmel Automotive Compilation", to help developers with automotive applications. Some current uses are in BMW, Daimler-Chrysler and TRW. The Arduino physical computing platform is based on an ATmega328 microcontroller (ATmega168 or ATmega8 in board versions older than the Diecimila). The ATmega1280 and ATmega2560, with more pinout and memory capabilities, have also been employed to develop the Arduino Mega platform. Arduino boards can be used with the Arduino language and IDE, or with more conventional programming environments (C, assembler, etc.) as just standardized and widely available AVR platforms. USB-based AVRs have been used in the Microsoft Xbox hand controllers. The link between the controllers and the Xbox is USB. Numerous companies produce AVR-based microcontroller boards intended for use by hobbyists, robot builders, experimenters and small-system developers, including: Cubloc, gnusb, BasicX, Oak Micros, ZX Microcontrollers, and myAVR. There is also a large community of Arduino-compatible boards supporting similar users. Schneider Electric used to produce the M3000 Motor and Motion Control Chip, incorporating an Atmel AVR core and an advanced motion controller for use in a variety of motion applications, but this has been discontinued. FPGA clones With the growing popularity of FPGAs among the open source community, people have started developing open source processors compatible with the AVR instruction set. 
The OpenCores website lists several major AVR clone projects. Other vendors In addition to the chips manufactured by Atmel, clones are available from LogicGreen Technologies. These parts are not exact clones – they have a few features not found in the chips they are "clones" of, and higher maximum clock speeds, but use SWD (Serial Wire Debug, a variant of JTAG from ARM) instead of ISP for programming, so different programming tools must be used. Microcontrollers using the ATmega architecture are manufactured by NIIET in Voronezh, Russia, as part of the 1887 series of integrated circuits. This includes an ATmega128 under the designation 1887VE7T (Russian: 1887ВЕ7Т). References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Bloomberg_L.P.] | [TOKENS: 4527] |
Contents Bloomberg L.P. Bloomberg L.P. is an American privately held financial, software, data, and media company headquartered in Midtown Manhattan, New York City. It was co-founded by Michael Bloomberg in 1981, with Thomas Secunda, Duncan MacMillan, Charles Zegar, and a 12% ownership investment by Merrill Lynch. Bloomberg L.P. provides financial software tools and enterprise applications such as analytics and an equity trading platform, data services, and news to financial companies and organizations through the Bloomberg Terminal (via its Bloomberg Professional Service), its core revenue-generating product. Bloomberg L.P. also includes a news agency (Bloomberg News), a global television network (Bloomberg Television), websites, radio stations (Bloomberg Radio), subscription-only newsletters, and two magazines: Bloomberg Businessweek and Bloomberg Markets. As of 2019, the company had 176 locations and nearly 20,000 employees. History In 1981, Salomon Brothers was acquired by the commodities firm Phibro, and Michael Bloomberg, a general partner, was given a $10 million partnership settlement. Bloomberg, having designed in-house computerized financial systems for Salomon, used his partnership buyout to start Innovative Market Systems (IMS). Bloomberg developed and built his own computerized system to provide real-time market data, financial calculations and other financial analytics to Wall Street firms. The Market Master terminal, later called the Bloomberg Terminal, was released to market in December 1982. Merrill Lynch became the first customer, purchasing 20 terminals and a 30% equity stake in the company for $30 million in exchange for a five-year restriction on marketing the terminal to Merrill Lynch's competitors. Merrill Lynch released IMS from this restriction in 1984. In 1986, the company renamed itself Bloomberg L.P. (limited partnership). Bloomberg launched Bloomberg Business News, later Bloomberg News, in 1990, with Matthew Winkler as editor-in-chief. Bloomberg.com was first established on September 29, 1993, as a financial portal with information on markets, currency conversion, news and events, and Bloomberg Terminal subscriptions. In late 1996, Bloomberg bought back one-third of Merrill Lynch's 30 percent stake in the company for $200 million, valuing the company at $2 billion. In 2008, facing losses during the financial crisis, Merrill Lynch agreed to sell its remaining 20 percent stake in the company back to Bloomberg Inc., majority-owned by Michael Bloomberg, for a reported $4.43 billion, valuing Bloomberg L.P. at approximately $22.5 billion. Bloomberg L.P. has remained a private company since its founding; the majority of it is owned by billionaire Michael Bloomberg. To run for Mayor of New York against Democrat Mark Green in 2001, Bloomberg gave up the position of CEO and appointed Lex Fenwick as CEO in his stead. In 2012, Peter Grauer became the chairman of the company, a role he held until 2023. In 2008, Fenwick became the CEO of Bloomberg Ventures, a new venture capital division, and Daniel Doctoroff, former deputy mayor in the Bloomberg administration, was named president and CEO, serving until September 2014. At that point, it was announced that Michael Bloomberg would take the reins of his eponymous market data company from Doctoroff, who had been chief executive of Bloomberg for the previous six years after his term as deputy mayor. In May 2022, Bloomberg announced it would launch a new venture in the UK, Bloomberg UK, as part of a wider international strategy. 
Bloomberg UK plans to hire in the region and has launched a standalone website, a weekly video series, a podcast and a new event series. In August 2023, Michael Bloomberg announced a series of leadership changes for the company, with Chief Product Officer Vlad Kliatchko assuming the role of CEO and Chief Operating Officer Jean-Paul Zammitt assuming the role of President. He also announced a new board of directors, chaired by Mark Carney (taking over from Peter Grauer), and said that existing board members would move to "Emeritus Status". Mark Carney resigned from his chairmanship shortly before becoming the Prime Minister of Canada. Senior leadership Acquisitions Since its founding, Bloomberg L.P. has made several acquisitions, including the radio station WNEW, BusinessWeek magazine, the research company New Energy Finance, the Bureau of National Affairs and the financial software company Bloomberg PolarLake. On July 9, 2014, Bloomberg L.P. acquired RTS Realtime Systems, a global provider of low-latency connectivity and trading support services. On August 13, 2019, Bloomberg acquired RegTek.Solutions in a move to expand its suite of regulatory reporting and data management services. On March 13, 2023, Bloomberg entered into an agreement to acquire Broadway Technology, a provider of high-performance trading systems and fixed income trading solutions. In 1992, Bloomberg L.P. purchased the New York radio station WNEW for $13.5 million. The station was converted to an all-news format, known as Bloomberg Radio, and the call letters were changed to WBBR. Bloomberg L.P. bought a weekly business magazine, BusinessWeek, from McGraw-Hill in 2009. The company acquired the magazine—which was suffering from declining advertising revenue and limited circulation numbers—to attract general business readers to a media audience composed primarily of terminal subscribers. Following the acquisition, BusinessWeek was renamed Bloomberg Businessweek. In 2018, Joel Weber was named editor of the magazine. In 2009, Bloomberg L.P. purchased New Energy Finance, a data company focused on energy investment and carbon markets research based in the United Kingdom. New Energy Finance was created by Michael Liebreich in 2004 to provide news, data and analysis on carbon and clean energy markets. Bloomberg L.P. acquired the company to become an industry resource for information supporting low-carbon energy development. It was renamed Bloomberg New Energy Finance, or BNEF for short. Liebreich continued to lead the company, serving as chief executive officer until 2014, when he stepped down as CEO but remained involved as chairman of the advisory board. BloombergNEF has expanded its research areas to cover renewable energy, advanced transport, digital industry, innovative materials, sustainability and commodities. BNEF provides research, long-term forecasts, analytical tools and global in-depth analysis covering a wide range of energy and related industries. Analysts covering six continents publish more than 700 research reports a year. Bloomberg L.P. purchased the Arlington, Virginia-based Bureau of National Affairs in August 2011 for $990 million, to bolster its existing Bloomberg Government and Bloomberg Law services. BNA publishes specialized online and print news and information for professionals in business and government. 
The company produces more than 350 news publications in topic areas that include corporate law and business, employee benefits, employment and labor law, environment, health and safety, health care, human resources, intellectual property, litigation, and tax and accounting. In May 2012, Bloomberg L.P. acquired the Dublin-based software provider PolarLake and launched a new enterprise data management service to help companies acquire, manage, and distribute data across their organizations. On December 16, 2015, it was announced that Barclays had agreed to sell its index business, Barclays Risk Analytics and Index Solutions Ltd (BRAIS), to Bloomberg L.P. for £520 million, or about $787 million. The business was renamed Bloomberg Index Services Limited. In September 2014, Bloomberg sold its Bloomberg Sports analysis division to the data analysis firm STATS LLC (now Stats Perform) for a fee rumored to be between $15 million and $20 million. On December 10, 2019, Bloomberg Media announced that it had reached an agreement to acquire CityLab, The Atlantic's news site covering transportation, environment, equity, life, and design. This was Bloomberg's first acquisition of an editorial property in over a decade. Products and services In 2011, sales from Bloomberg Professional Services, also known as the Bloomberg Terminal, accounted for more than 85 percent of Bloomberg L.P.'s annual revenue. The financial data vendor's proprietary computer system, starting at $24,000 per user per year, allows subscribers to access the Bloomberg Professional service to monitor and analyze real-time financial data, search financial news, obtain price quotes and send electronic messages through the Bloomberg Messaging Service. The Terminal covers both public and private markets globally. Bloomberg News was co-founded by Michael Bloomberg and Matthew Winkler in 1990 to deliver financial news reporting to Bloomberg terminal subscribers. In 2000, Bloomberg News included more than 2,300 editors and reporters in 100 countries. Content produced by Bloomberg News is disseminated through the Bloomberg terminal, Bloomberg Television, Bloomberg Radio, Bloomberg Businessweek, Bloomberg Markets and Bloomberg.com. Since 2015, John Micklethwait has served as editor-in-chief. Bloomberg Television, a service of Bloomberg News, is a 24-hour financial news television network. It was introduced in 1994 as a subscription service transmitted on the satellite television provider DirecTV, 13 hours a day, 7 days a week. The network took over the channel space of the defunct Financial News Network and hired most of the former FNN employees. Soon after, the network entered the cable television market, and by 2000 Bloomberg's 24-hour news programming was available to 200 million households. Justin B. Smith is CEO of the Bloomberg Multimedia Group, which includes Bloomberg Radio, Bloomberg Television and the online components of Bloomberg's multimedia offerings. Bloomberg Markets is a monthly magazine launched in 1992 that provides in-depth coverage of global financial markets for finance professionals. In 2010, the magazine was redesigned in an effort to expand its readership beyond Bloomberg terminal users. Michael Dukmejian has served as the magazine's publisher since 2009. Bloomberg Pursuits was a bimonthly luxury magazine distributed to Bloomberg terminal users and to newsstands. It ceased publication in 2016. A digital edition and a show on Bloomberg Television continue under the same name. 
Bloomberg Entity Exchange is a web-based, centralised and secure platform for buy-side firms, sell-side firms, corporations and insurance firms, banks or brokers to fulfill Know Your Customer (KYC) compliance requirements. It was launched on May 25, 2016. Launched in 2011, Bloomberg Government is an online service that provides news and information about politics, along with legislative and regulatory coverage. In 2009, Bloomberg L.P. introduced Bloomberg Law, a subscription service for real-time legal research. A subscription to the service provides access to law dockets, legal filings, and reports from Bloomberg legal analysts, as well as business news and information. Bloomberg Opinion, formerly Bloomberg View, is an editorial division of Bloomberg News which launched in May 2011. The site provides editorial content from columnists, authors and editors about news issues and is available for free on the company's website. David Shipley, former Op-Ed page editor at The New York Times, is Bloomberg Opinion's executive editor. Bloomberg Tradebook is an electronic agency brokerage for equity, futures, options and foreign exchange trades. Its "buyside" services include access to trading algorithms, analytics and marketing insights, while its "sellside" services include connection to electronic trading networks and global trading capabilities. Bloomberg Tradebook was founded in 1996 as an affiliate of Bloomberg L.P. Bloomberg Beta is a venture capital firm capitalized by Bloomberg L.P. Founded in 2013, the $75 million fund is focused on investments in areas broadly of interest to Bloomberg L.P., and invests purely for financial return. It is headquartered in San Francisco. The Bloomberg Innovation Index is an annual ranking of how innovative countries are. It is based on six criteria: research and development, manufacturing, high-tech companies, post-secondary education, research personnel, and patents. Bloomberg uses data from the World Bank, the International Monetary Fund, the World Intellectual Property Organization, the United States Patent and Trademark Office, the OECD and UNESCO to compile the ranking. Bloomberg has openly licensed its symbology system (Bloomberg Open Symbology, BSYM) and financial data API (Bloomberg Programming API, BLPAPI). Bloomberg Live is a series of conferences targeted towards business people. Quicktake (formerly TicToc) is Bloomberg's social media brand. Originally launched on Twitter, it was expanded to other platforms including Facebook, Instagram and YouTube, and is also available on Amazon's Alexa. It also plays on several screens across multiple airports in the United States and Canada. The platform is managed by a team of 70 people, consisting of editors, producers and social media specialists located across three bureaus in New York, London and Hong Kong. In December 2019, TicToc was renamed "QuickTake by Bloomberg" in order to avoid confusion with the social media platform TikTok. The camel case and the word "by" were later eliminated, and "Bloomberg" was moved in front of "Quicktake", although the two words do not always appear together. The Bloomberg New Economy Forum is an invitation-only event for business executives, government officials, and academics. The inaugural event was held in 2018 in Singapore. In 2019, the forum took place in Beijing, China. In 2020 the event was held virtually. Both the 2021 and 2022 events were held in Singapore. 
The Bloomberg New Economy Forum Community includes leaders from the public and private sectors from around the world. Founding partners of the forum included 3M, ADNOC, Dangote, ExxonMobil, FedEx, HSBC, Hyundai, Mastercard, Microsoft, and Softbank. In 2021, the company initiated the Bloomberg New Economy Catalysts program to spotlight the work of changemakers and their impact on the world. Bloomberg Línea is a partnership launched in 2021 to serve Spanish-, Portuguese- and English-speaking Latino audiences. Bloomberg Intelligence (BI) is Bloomberg's research division. It provides data and analytics on global markets. It is available exclusively on the Bloomberg Terminal and the Bloomberg Professional App. Offices Bloomberg L.P.'s headquarters is located at 731 Lexington Avenue (informally known as Bloomberg Tower) in Midtown Manhattan, New York City. As of 2011, Bloomberg L.P. occupied 900,000 square feet (84,000 m2) of office space at the base of the tower. The company's New York offices also include 400,000 square feet (37,000 m2) located at 120 Park Avenue and 924,876 square feet (85,924 m2) located at 919 Third Avenue. It maintains offices in 167 locations around the world, including Bloomberg London, its European headquarters. The Bloomberg L.P. offices are non-hierarchical – even executives do not have private offices. All employees sit at identical white desks, each topped with a custom-built Bloomberg computer terminal. The office space also includes rows of flat-panel monitors overhead that display news, market data, the weather and Bloomberg customer service statistics. Bloomberg L.P.'s Management Committee includes Michael Bloomberg, Peter Grauer, and Thomas Secunda. Controversies Between 2010 and 2017, a "pay-to-play" scheme operated between two Turner Construction executives, two Bloomberg executives, and vendors and subcontractors involved in interior construction at the Bloomberg offices, including its headquarters at 731 Lexington Ave. In July 2020, Bloomberg's construction manager Michael Campana was sentenced to two years in prison for evading taxes on $420,000 in bribes he accepted. The bribes took the form of cash, work on personal property, Super Bowl tickets and payment for Campana's wedding. On September 29, 2020, Anthony Guzzone, the Director of Global Construction at Bloomberg from 2010 to 2017, pleaded guilty to evading taxes on over $1.45 million he received in bribes from construction subcontractors in exchange for awarding them work performed for Bloomberg. Guzzone accepted more than $5.1 million in bribes. He was sentenced to three years and two months in prison in January 2021. In 1996, former Bloomberg L.P. sales representative Mary Ann Olszewski sued the company, alleging that she was drugged and raped by her supervisor, Bryan Lewis, and claimed she was terminated shortly after reporting the incident in a May 25, 1995, meeting. The lawsuit also alleged the company internally investigated Olszewski, attempting to get coworkers to portray her as "flirtatious" or a "sex hound." Olszewski also claimed that male employees at the company engaged in the "sexual degradation of women" and that the company "took no steps to prevent or curtail the ongoing sexual harassment of female employees by Michael Bloomberg." Bloomberg, on behalf of Bloomberg L.P., testified that he was made aware of the rape allegation and offered to move Olszewski into another sales unit. 
Bloomberg also testified that he did not find Olszewski's allegation genuine because there was not "an unimpeachable third-party witness" present during the alleged event, elaborating that "there are times when three people are together." The case was dismissed by a federal judge in 1999 after Olszewski's lawyer had ignored repeated deadlines to submit a response to a motion to end the case. The case was re-opened by another lawyer in 2000, but disappeared from the court docket in 2001. In 1997, former Bloomberg L.P. sales executive Sekiko Sakai Garrison filed a lawsuit against the company and Michael Bloomberg, alleging sexual harassment and wrongful termination. Garrison alleged that when she told Bloomberg that she was pregnant, he told her to "Kill it!" and said "Great! Number 16," referring to the number of women in the company who were pregnant or on maternity leave at the time. According to the lawsuit, Garrison told a manager about the incident but was told to "forget it ever happened" before being fired. Garrison also claimed that Bloomberg told female salespeople to "line up to give him [oral sex] as a wedding present," referring to a male employee who was getting married. The lawsuit also alleged that Bloomberg berated a female employee who had trouble finding a nanny, saying, "It's a f------ baby! All it does is eat and s---! It doesn't know the difference between you and anyone else! All you need is some black who doesn't even have to speak English to rescue it from a burning building!" The company did not admit any wrongdoing, but settled the lawsuit out of court in 2000. In September 2007, the Equal Employment Opportunity Commission (EEOC) filed a class-action lawsuit against Bloomberg L.P. on behalf of more than 80 female employees who argued that Bloomberg L.P. engaged in discrimination against women who took maternity leave. In August 2011, Judge Loretta A. Preska of the federal United States District Court for the Southern District of New York in Manhattan dismissed the charges, writing that the Equal Employment Opportunity Commission did not present sufficient evidence to support its claim. In September 2013, Preska dismissed an EEOC lawsuit on behalf of 29 pregnant employees of Bloomberg L.P. In addition, she dismissed pregnancy bias claims from five individual plaintiffs, and allowed part of the case from a sixth plaintiff to proceed. Bloomberg L.P. brought a lawsuit against the Board of Governors of the Federal Reserve System (Bloomberg L.P. v. Board of Governors of the Federal Reserve System) to force the Fed to share details about its lending programs during the U.S. Government bailout in 2008. The records documented Federal Reserve loans issued to financial firms and revealed the identities of the firms, the amounts borrowed and the collateral posted in return. Bloomberg L.P. won at the trial court level. The Second Circuit Court ruled in favor of Bloomberg L.P. in March 2010, but the case was appealed to the Supreme Court by a group of large U.S. commercial banks in October. In March 2011, the Supreme Court let stand the Second Circuit Court ruling mandating the release of Fed bailout details. On October 22, 2008, Bloomberg L.P. applied for a change of name of Bloomberg Ltd, under s. 69(1)(b) of the Companies Act 2006. Bloomberg L.P. then amended its name to Bloomberg Finance Three L.P. 
Bloomberg Ltd was ordered at the Company Names Tribunal on May 11, 2009, to change its name so as not to have a name that would likely interfere, by similarity, with the goodwill of Bloomberg Finance Three L.P., as well as to pay costs. In a case filed in August 2020, Bloomberg L.P. was again charged with discrimination, this time against black and non-white workers. Nafeesa Syeed, who served at Bloomberg for around four years as a national security reporter and Middle East reporter, sued the corporation in New York City court for discrimination based on her gender and her ethnicity as a South Asian-American. Bloomberg faces a related complaint from a former saleswoman, who filed under a pseudonym in June; the corporation is now seeking to compel a public release of her name. In both cases, the same law firm represented the plaintiffs. An investigation by The Intercept, The Nation, and DeSmog found that Bloomberg L.P. is one of the leading media outlets publishing advertising for the fossil fuel industry. Journalists who cover climate change for Bloomberg News are concerned that conflicts of interest with the companies and industries that caused climate change and obstructed action will reduce the credibility of their reporting on climate change and cause readers to downplay the climate crisis. While the 2024 Russian prisoner exchange was still in progress, Bloomberg News broke a news embargo by reporting information provided by the White House. Other publications, including the Wall Street Journal, criticized Bloomberg for breaking the embargo, potentially jeopardizing the exchange, and for a Bloomberg editor's apparent boasting about being the first to publish a breaking news story. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/SiMPLE] | [TOKENS: 858] |
Contents SiMPLE SiMPLE (a recursive acronym for SiMPLE Modular Programming Language & Environment) is a programming development system that was created to provide easy programming capabilities for everybody, especially non-professionals. Following the death of SiMPLE creator Bob Bishop, the SiMPLE Codeworks website and forums are now offline; however, they can be accessed via the Internet Archive. History In 1995, Bob Bishop and Rich Whicker (both former Apple Computer engineers) decided to create a new programming language that would be easy enough for everyone to understand and use. (They felt that other existing languages such as C++ and their environments were far too complicated for beginners.) The programming language that they created was called SiMPLE. SiMPLE is vaguely reminiscent of the AppleSoft BASIC programming language that exists on Apple II computers. However, SiMPLE is not (and was never intended to be) a "clone" of Applesoft BASIC; it was merely "inspired" by it. There were many features of Applesoft that needed to be improved. For example, Applesoft was an interpreted language, and so it ran somewhat slowly (even for a 1 MHz processor). SiMPLE, on the other hand, compiles into an executable (.EXE) file, so it produces programs that run faster, and those programs can even run on computers that don't have SiMPLE installed. Another difference between the two languages is in the use of line numbers. Applesoft required them; SiMPLE doesn't use them at all. (Instead of typing program statements onto the black Apple screen, SiMPLE uses a text editor.) Furthermore, the "FOR-NEXT" loops in Applesoft have been replaced by "Do-Loop" instructions in SiMPLE (but they function in much the same way). However, aside from a few differences in their outward appearances, writing programs in SiMPLE has a similar "feel" to writing programs in Applesoft. For example, when using SiMPLE in command-line mode, a program is run by simply typing the word "RUN" on a black screen (just as was done on the Apple!) Versions "SiMPLE" is a generic term for three slightly different versions of the language: Micro-SiMPLE, Pro-SiMPLE, and Ultra-SiMPLE. Prior to June 2011, SiMPLE was available only for 32-bit computers. Since then, a newer version (which can be used on either 32-bit or 64-bit computers) has become the standard version. In this newer version of SiMPLE, the terms "Pro-SiMPLE" and "Ultra-SiMPLE" have been replaced by "Dos-SiMPLE" and "Win-SiMPLE" respectively. However, in an effort to provide as much backward compatibility as possible, both of those obsolete terms ("Ultra-SiMPLE" and "Pro-SiMPLE") are still accepted as legitimate compiler directives. In addition, the design of the newer version of SiMPLE is more "streamlined". The old original version of SiMPLE was designed to be used only in the closed environment of command-line mode. (The "Drag & Drop" mode of operation wasn't added until many years later.) Consequently, the old SiMPLE's command-line mode required dozens of commands (to support such capabilities as deleting source listings, renaming files, creating new project folders, etc.). The newer version of SiMPLE integrates the SiMPLE environment with the Windows environment, so that many of the old SiMPLE's command-line commands are no longer necessary and have been eliminated. 
Modes of operation SiMPLE programs can be run in either "Drag & Drop" mode (intended primarily for beginning programmers) or in "Command-Line" mode (for more advanced programmers). Keywords used by SiMPLE SiMPLE will run on Windows 95 and newer systems. An example program, together with the output it produces, was shown on the SiMPLE Codeworks website and is preserved in the Internet Archive: https://web.archive.org/web/20150412025158/http://www.simplecodeworks.com/example.gif
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Weyl_metrics] | [TOKENS: 3611] |
Weyl metrics In general relativity, the Weyl metrics (named after the German-American mathematician Hermann Weyl) are a class of static and axisymmetric solutions to Einstein's field equation. Three members of the renowned Kerr–Newman family of solutions, namely the Schwarzschild, nonextremal Reissner–Nordström and extremal Reissner–Nordström metrics, can be identified as Weyl-type metrics.

Standard Weyl metrics The Weyl class of solutions has the generic form

$$ds^2 = -e^{2\psi(\rho,z)}\,dt^2 + e^{2\gamma(\rho,z)-2\psi(\rho,z)}\,(d\rho^2+dz^2) + e^{-2\psi(\rho,z)}\,\rho^2\,d\phi^2\,, \qquad (1)$$

where $\psi(\rho,z)$ and $\gamma(\rho,z)$ are two metric potentials dependent on Weyl's canonical coordinates $\{\rho,z\}$. The coordinate system $\{t,\rho,z,\phi\}$ serves best for the symmetries of Weyl's spacetime (with the two Killing vector fields being $\xi^t=\partial_t$ and $\xi^\phi=\partial_\phi$) and often acts like cylindrical coordinates, but it is incomplete when describing a black hole, as $\{\rho,z\}$ only cover the horizon and its exterior. Hence, to determine a static axisymmetric solution corresponding to a specific stress–energy tensor $T_{ab}$, we just need to substitute the Weyl metric Eq(1) into Einstein's equation (with $c=G=1$),

$$R_{ab}-\frac{1}{2}g_{ab}R = 8\pi T_{ab}\,, \qquad (2)$$

and work out the two functions $\psi(\rho,z)$ and $\gamma(\rho,z)$.

Reduced field equations for electrovac Weyl solutions One of the best investigated and most useful Weyl solutions is the electrovac case, where $T_{ab}$ comes from the existence of a (Weyl-type) electromagnetic field (without matter or current flows). Given the electromagnetic four-potential $A_a$, the antisymmetric electromagnetic field $F_{ab}$ and the trace-free stress–energy tensor $T_{ab}$ $(T=g^{ab}T_{ab}=0)$ will be respectively determined by

$$F_{ab} = \nabla_a A_b - \nabla_b A_a\,, \qquad (3)$$

$$T_{ab} = \frac{1}{4\pi}\Big(F_{ac}F_b{}^{c} - \frac{1}{4}\,g_{ab}F_{cd}F^{cd}\Big)\,, \qquad (4)$$

which respect the source-free covariant Maxwell equations

$$\nabla^b F_{ab} = 0\,. \qquad (5.a)$$

Eq(5.a) can be simplified to

$$\partial_b\big(\sqrt{-g}\,F^{ab}\big) = 0 \qquad (5.b)$$

in the calculations, as $\Gamma^a_{bc}=\Gamma^a_{cb}$. Also, since $R=-8\pi T=0$ for electrovacuum, Eq(2) reduces to

$$R_{ab} = 8\pi T_{ab}\,. \qquad (6)$$

Now, suppose the Weyl-type axisymmetric electrostatic potential is $A_a=\Phi(\rho,z)\,[dt]_a$ (the component $\Phi$ is actually the electromagnetic scalar potential); then, together with the Weyl metric Eq(1), Eqs(3)(4)(5)(6) imply that

$$\gamma_{,\rho\rho}+\gamma_{,zz} = -\nabla\psi\cdot\nabla\psi + e^{-2\psi}\,\nabla\Phi\cdot\nabla\Phi\,, \qquad (7.a)$$

$$\nabla^2\psi = e^{-2\psi}\,\nabla\Phi\cdot\nabla\Phi\,, \qquad (7.b)$$

$$\gamma_{,\rho} = \rho\big(\psi_{,\rho}^2-\psi_{,z}^2\big) - \rho\,e^{-2\psi}\big(\Phi_{,\rho}^2-\Phi_{,z}^2\big)\,, \qquad (7.c)$$

$$\gamma_{,z} = 2\rho\,\psi_{,\rho}\psi_{,z} - 2\rho\,e^{-2\psi}\,\Phi_{,\rho}\Phi_{,z}\,, \qquad (7.d)$$

$$\nabla^2\Phi = 2\,\nabla\psi\cdot\nabla\Phi\,, \qquad (7.e)$$

where $R=0$ yields Eq(7.a), $R_{tt}=8\pi T_{tt}$ or $R_{\varphi\varphi}=8\pi T_{\varphi\varphi}$ yields Eq(7.b), $R_{\rho\rho}=8\pi T_{\rho\rho}$ or $R_{zz}=8\pi T_{zz}$ yields Eq(7.c), $R_{\rho z}=8\pi T_{\rho z}$ yields Eq(7.d), and Eq(5.b) yields Eq(7.e). Here $\nabla^2 = \partial_{\rho\rho}+\frac{1}{\rho}\,\partial_\rho+\partial_{zz}$ and $\nabla = \partial_\rho\,\hat{e}_\rho+\partial_z\,\hat{e}_z$ are respectively the Laplace and gradient operators.
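The reduced system Eqs(7.a–7.e) can be spot-checked symbolically. The following is a minimal sketch using Python's sympy, assuming the extremal Reissner–Nordström potentials quoted later in this article ($\psi=\ln[L/(L+M)]$, $\Phi=M/(L+M)$, $L=\sqrt{\rho^2+z^2}$, $\gamma=0$); the helper function names are illustrative, not taken from the article.

```python
# Symbolic spot-check of Eqs(7.b) and (7.e) with sympy, using the
# extremal Reissner-Nordstrom Weyl potentials quoted later in the article.
import sympy as sp

rho, z, M = sp.symbols('rho z M', positive=True)

def lap(f):
    # axisymmetric Laplacian: f_,rhorho + f_,rho / rho + f_,zz
    return sp.diff(f, rho, 2) + sp.diff(f, rho) / rho + sp.diff(f, z, 2)

def grad_dot(f, g):
    # axisymmetric gradient inner product: f_,rho g_,rho + f_,z g_,z
    return sp.diff(f, rho) * sp.diff(g, rho) + sp.diff(f, z) * sp.diff(g, z)

L = sp.sqrt(rho**2 + z**2)
psi = sp.log(L / (L + M))   # extremal RN metric potential (gamma = 0)
Phi = M / (L + M)           # electrostatic scalar potential

# Eq(7.b):  lap(psi) = e^(-2 psi) * grad(Phi).grad(Phi)
print(sp.simplify(lap(psi) - sp.exp(-2 * psi) * grad_dot(Phi, Phi)))  # -> 0
# Eq(7.e):  lap(Phi) = 2 * grad(psi).grad(Phi)
print(sp.simplify(lap(Phi) - 2 * grad_dot(psi, Phi)))                 # -> 0
```

Both residuals simplify to zero, confirming that the quoted potentials solve the reduced electrovac system.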
Moreover, if we suppose $\psi=\psi(\Phi)$ in the sense of matter–geometry interplay and assume asymptotic flatness, we will find that Eqs(7.a–e) imply the characteristic relation

$$e^{2\psi} = 1 - 2C\Phi + \Phi^2\,. \qquad (7.f)$$

Specifically, in the simplest vacuum case with $\Phi=0$ and $T_{ab}=0$, Eqs(7.a–7.e) reduce to

$$\gamma_{,\rho\rho}+\gamma_{,zz} = -\nabla\psi\cdot\nabla\psi\,, \qquad (8.a)$$

$$\nabla^2\psi = 0\,, \qquad (8.b)$$

$$\gamma_{,\rho} = \rho\big(\psi_{,\rho}^2-\psi_{,z}^2\big)\,, \qquad (8.c)$$

$$\gamma_{,z} = 2\rho\,\psi_{,\rho}\psi_{,z}\,. \qquad (8.d)$$

We can first obtain $\psi(\rho,z)$ by solving Eq(8.b), and then integrate Eq(8.c) and Eq(8.d) for $\gamma(\rho,z)$. Practically, Eq(8.a), arising from $R=0$, just works as a consistency relation or integrability condition. Unlike the nonlinear Poisson equation Eq(7.b), Eq(8.b) is the linear Laplace equation; that is to say, a superposition of given vacuum solutions to Eq(8.b) is still a solution. This fact has wide application, for instance to analytically distort a Schwarzschild black hole; a minimal symbolic check of this superposition property follows at the end of this passage. We employed the axisymmetric Laplace and gradient operators to write Eqs(7.a–7.e) and Eqs(8.a–8.d) in a compact way, which is very useful in the derivation of the characteristic relation Eq(7.f). In the literature, Eqs(7.a–7.e) and Eqs(8.a–8.d) are often written in explicit component form as well:

$$\psi_{,\rho\rho}+\frac{1}{\rho}\,\psi_{,\rho}+\psi_{,zz} = e^{-2\psi}\big(\Phi_{,\rho}^2+\Phi_{,z}^2\big)\,, \qquad \Phi_{,\rho\rho}+\frac{1}{\rho}\,\Phi_{,\rho}+\Phi_{,zz} = 2\big(\psi_{,\rho}\Phi_{,\rho}+\psi_{,z}\Phi_{,z}\big)$$

and

$$\psi_{,\rho\rho}+\frac{1}{\rho}\,\psi_{,\rho}+\psi_{,zz} = 0\,, \qquad \gamma_{,\rho} = \rho\big(\psi_{,\rho}^2-\psi_{,z}^2\big)\,, \qquad \gamma_{,z} = 2\rho\,\psi_{,\rho}\psi_{,z}\,.$$

Considering the interplay between spacetime geometry and energy–matter distributions, it is natural to assume that in Eqs(7.a–7.e) the metric function $\psi(\rho,z)$ relates to the electrostatic scalar potential $\Phi(\rho,z)$ via a function $\psi=\psi(\Phi)$ (which means geometry depends on energy), and it follows that

$$\nabla\psi = \psi'(\Phi)\,\nabla\Phi\,, \qquad \nabla^2\psi = \psi'(\Phi)\,\nabla^2\Phi + \psi''(\Phi)\,\nabla\Phi\cdot\nabla\Phi\,. \qquad (B.1)$$

Eq(B.1) immediately turns Eq(7.b) and Eq(7.e) respectively into

$$\psi'(\Phi)\,\nabla^2\Phi + \psi''(\Phi)\,\nabla\Phi\cdot\nabla\Phi = e^{-2\psi}\,\nabla\Phi\cdot\nabla\Phi \qquad (B.2)$$

and

$$\nabla^2\Phi = 2\,\psi'(\Phi)\,\nabla\Phi\cdot\nabla\Phi\,, \qquad (B.3)$$

which give rise to

$$\psi''(\Phi) + 2\,\psi'(\Phi)^2 = e^{-2\psi}\,. \qquad (B.4)$$

Now replace the variable $\psi$ by $\zeta:=e^{2\psi}$, and Eq(B.4) is simplified to

$$\zeta''(\Phi) = 2\,. \qquad (B.5)$$

Direct quadrature of Eq(B.5) yields $\zeta=e^{2\psi}=\Phi^2+\tilde{C}\Phi+B$, with $\{B,\tilde{C}\}$ being integral constants. To restore asymptotic flatness at spatial infinity, we need $\lim_{\rho,z\to\infty}\Phi=0$ and $\lim_{\rho,z\to\infty}e^{2\psi}=1$, so there should be $B=1$. Also, rewriting the constant $\tilde{C}$ as $-2C$ for mathematical convenience in subsequent calculations, one finally obtains the characteristic relation implied by Eqs(7.a–7.e),

$$e^{2\psi} = 1 - 2C\Phi + \Phi^2\,. \qquad (7.f)$$

This relation is important in linearizing Eqs(7.a–7.e) and superposing electrovac Weyl solutions.

Newtonian analogue of metric potential ψ(ρ,z) In Weyl's metric Eq(1), $e^{\pm2\psi}=\sum_{n=0}^{\infty}\frac{(\pm2\psi)^n}{n!}$; thus in the approximation for the weak-field limit $\psi\to0$, one has

$$e^{2\psi}\approx 1+2\psi\,, \qquad e^{-2\psi}\approx 1-2\psi\,,$$

and therefore

$$ds^2 \approx -\big(1+2\psi\big)\,dt^2 + \big(1-2\psi\big)\Big[e^{2\gamma}\big(d\rho^2+dz^2\big)+\rho^2\,d\phi^2\Big]\,.$$

This is closely analogous to the well-known approximate metric for static and weak gravitational fields generated by low-mass celestial bodies like the Sun and Earth,

$$ds^2 \approx -\big(1+2\Phi_N(\rho,z)\big)\,dt^2 + \big(1-2\Phi_N(\rho,z)\big)\big(d\rho^2+dz^2+\rho^2\,d\phi^2\big)\,,$$

where $\Phi_N(\rho,z)$ is the usual Newtonian potential satisfying Poisson's equation $\nabla_L^2\Phi_N=4\pi\varrho_N$, just like Eq(7.b) or Eq(8.b) for the Weyl metric potential $\psi(\rho,z)$.
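In the vacuum case the analogy is exact at the level of Eq(8.b): for instance, the Curzon-type potential $\psi=-m/L$, formally the Newtonian point-mass potential, solves Eq(8.b), and by linearity so does a superposition of two such terms, the starting point of the "distorting" constructions mentioned above. The following minimal sympy sketch checks this; the use of Curzon monopoles here, and the masses and axis offsets, are illustrative assumptions for the demonstration, not taken from the article.

```python
# Superposition check for the vacuum equation Eq(8.b): each Curzon-type
# monopole solves the axisymmetric Laplace equation, and so does their sum.
import sympy as sp

rho, z, m1, m2, z1, z2 = sp.symbols('rho z m1 m2 z1 z2', positive=True)

def lap(f):
    # axisymmetric Laplacian: f_,rhorho + f_,rho / rho + f_,zz
    return sp.diff(f, rho, 2) + sp.diff(f, rho) / rho + sp.diff(f, z, 2)

# Two Curzon-type monopole potentials centred at z = z1 and z = z2
psi1 = -m1 / sp.sqrt(rho**2 + (z - z1)**2)
psi2 = -m2 / sp.sqrt(rho**2 + (z - z2)**2)

print(sp.simplify(lap(psi1)))          # -> 0 (single monopole is harmonic)
print(sp.simplify(lap(psi1 + psi2)))   # -> 0 (superposition is still a solution)
```

The corresponding $\gamma$ for the superposed solution is then recovered, as in the single-body case, by integrating Eqs(8.c)(8.d).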
The similarities between $\psi(\rho,z)$ and $\Phi_N(\rho,z)$ inspire people to find the Newtonian analogue of $\psi(\rho,z)$ when studying the Weyl class of solutions; that is, to reproduce $\psi(\rho,z)$ nonrelativistically by a certain type of Newtonian source. The Newtonian analogue of $\psi(\rho,z)$ proves quite helpful in specifying particular Weyl-type solutions and in extending existing Weyl-type solutions.

Schwarzschild solution The Weyl potentials generating Schwarzschild's metric as a solution to the vacuum equations Eq(8) are given by

$$\psi_{SS}=\frac{1}{2}\ln\frac{L-M}{L+M}\,, \qquad \gamma_{SS}=\frac{1}{2}\ln\frac{L^2-M^2}{l_+\,l_-}\,,$$

where

$$L=\frac{l_+ + l_-}{2}\,, \qquad l_\pm=\sqrt{\rho^2+(z\pm M)^2}\,.$$

From the perspective of the Newtonian analogue, $\psi_{SS}$ equals the gravitational potential produced by a rod of mass $M$ and length $2M$ placed symmetrically on the $z$-axis; that is, by a line mass of uniform density $\sigma=1/2$ embedded in the interval $z\in[-M,M]$ (a numerical check of this rod analogue is sketched near the end of this article). (Note: based on this analogue, important extensions of the Schwarzschild metric have been developed, as discussed in ref.) Given $\psi_{SS}$ and $\gamma_{SS}$, Weyl's metric Eq(1) becomes

$$ds^2=-\frac{L-M}{L+M}\,dt^2+\frac{(L+M)^2}{l_+\,l_-}\,\big(d\rho^2+dz^2\big)+\frac{L+M}{L-M}\,\rho^2\,d\phi^2\,, \qquad (14)$$

and after substituting the following mutually consistent relations

$$L+M=r\,, \qquad l_+ + l_- = 2(r-M)\,, \qquad z=(r-M)\cos\theta\,, \qquad \rho=\sqrt{r^2-2Mr}\,\sin\theta\,, \qquad (15)$$

one can obtain the common form of the Schwarzschild metric in the usual $\{t,r,\theta,\phi\}$ coordinates,

$$ds^2=-\Big(1-\frac{2M}{r}\Big)\,dt^2+\Big(1-\frac{2M}{r}\Big)^{-1}dr^2+r^2\,d\theta^2+r^2\sin^2\theta\,d\phi^2\,. \qquad (16)$$

The metric Eq(14) cannot be directly transformed into Eq(16) by performing the standard cylindrical–spherical transformation $(t,\rho,z,\phi)=(t,r\sin\theta,r\cos\theta,\phi)$, because $\{t,r,\theta,\phi\}$ is complete while $(t,\rho,z,\phi)$ is incomplete. This is why we call $\{t,\rho,z,\phi\}$ in Eq(1) Weyl's canonical coordinates rather than cylindrical coordinates, although they have a lot in common; for example, the Laplacian $\nabla^2:=\partial_{\rho\rho}+\frac{1}{\rho}\partial_\rho+\partial_{zz}$ in Eq(7) is exactly the two-dimensional geometric Laplacian in cylindrical coordinates.

Nonextremal Reissner–Nordström solution The Weyl potentials generating the nonextremal Reissner–Nordström solution ($M>|Q|$) as solutions to Eqs(7) are given by

$$\psi_{RN}=\frac{1}{2}\ln\frac{L^2-(M^2-Q^2)}{(L+M)^2}\,, \qquad \gamma_{RN}=\frac{1}{2}\ln\frac{L^2-(M^2-Q^2)}{l_+\,l_-}\,, \qquad \Phi_{RN}=\frac{Q}{L+M}\,,$$

where

$$L=\frac{l_+ + l_-}{2}\,, \qquad l_\pm=\sqrt{\rho^2+\big(z\pm\sqrt{M^2-Q^2}\big)^2}\,.$$

Thus, given $\psi_{RN}$ and $\gamma_{RN}$, Weyl's metric becomes

$$ds^2=-\frac{L^2-(M^2-Q^2)}{(L+M)^2}\,dt^2+\frac{(L+M)^2}{l_+\,l_-}\,\big(d\rho^2+dz^2\big)+\frac{(L+M)^2}{L^2-(M^2-Q^2)}\,\rho^2\,d\phi^2\,,$$

and employing the following transformations

$$L+M=r\,, \qquad l_+ + l_- = 2(r-M)\,, \qquad z=(r-M)\cos\theta\,, \qquad \rho=\sqrt{r^2-2Mr+Q^2}\,\sin\theta\,, \qquad (21)$$

one can obtain the common form of the non-extremal Reissner–Nordström metric in the usual $\{t,r,\theta,\phi\}$ coordinates,

$$ds^2=-\Big(1-\frac{2M}{r}+\frac{Q^2}{r^2}\Big)\,dt^2+\Big(1-\frac{2M}{r}+\frac{Q^2}{r^2}\Big)^{-1}dr^2+r^2\,d\theta^2+r^2\sin^2\theta\,d\phi^2\,.$$

Extremal Reissner–Nordström solution The potentials generating the extremal Reissner–Nordström solution ($M=|Q|$) as solutions to Eqs(7) are given by

$$\psi_{ERN}=\ln\frac{L}{L+M}\,, \qquad \gamma_{ERN}=0\,, \qquad \Phi_{ERN}=\frac{Q}{L+M}\,, \qquad L=\sqrt{\rho^2+z^2}\,.$$

(Note: we treat the extremal solution separately because it is much more than the degenerate state of the nonextremal counterpart.)
Thus, the extremal Reissner–Nordström metric reads

$$ds^2=-\frac{L^2}{(L+M)^2}\,dt^2+\frac{(L+M)^2}{L^2}\,\big(d\rho^2+dz^2+\rho^2\,d\phi^2\big)\,,$$

and by substituting

$$L=r-M\,, \qquad z=(r-M)\cos\theta\,, \qquad \rho=(r-M)\sin\theta\,, \qquad (24)$$

we obtain the extremal Reissner–Nordström metric in the usual $\{t,r,\theta,\phi\}$ coordinates,

$$ds^2=-\Big(1-\frac{M}{r}\Big)^2\,dt^2+\Big(1-\frac{M}{r}\Big)^{-2}dr^2+r^2\big(d\theta^2+\sin^2\theta\,d\phi^2\big)\,.$$

Mathematically, the extremal Reissner–Nordström metric can also be obtained by taking the limit $Q\to M$ of the corresponding nonextremal equations, and in the meantime we sometimes need to use L'Hôpital's rule. Remarks: Weyl's metrics Eq(1) with a vanishing potential $\gamma(\rho,z)$ (like the extremal Reissner–Nordström metric) constitute a special subclass which have only one metric potential $\psi(\rho,z)$ to be identified. Extending this subclass by dropping the restriction of axisymmetry, one obtains another useful class of solutions (still using Weyl's coordinates), namely the conformastatic metrics,

$$ds^2=-e^{2\lambda(\rho,z,\phi)}\,dt^2+e^{-2\lambda(\rho,z,\phi)}\,\big(d\rho^2+dz^2+\rho^2\,d\phi^2\big)\,, \qquad (22)$$

where we use $\lambda$ in Eq(22) as the single metric function, in place of $\psi$ in Eq(1), to emphasize that the conformastatic class differs by allowing $\phi$-dependence (no axial symmetry required).

Weyl vacuum solutions in spherical coordinates Weyl's metric can also be expressed in spherical coordinates,

$$ds^2=-e^{2\psi(r,\theta)}\,dt^2+e^{2\gamma(r,\theta)-2\psi(r,\theta)}\,\big(dr^2+r^2\,d\theta^2\big)+e^{-2\psi(r,\theta)}\,r^2\sin^2\theta\,d\phi^2\,,$$

which equals Eq(1) via the coordinate transformation $(t,\rho,z,\phi)\mapsto(t,r\sin\theta,r\cos\theta,\phi)$. (Note: as shown by Eqs(15)(21)(24), this transformation is not always applicable.) In the vacuum case, Eq(8.b) for $\psi(r,\theta)$ becomes

$$\psi_{,rr}+\frac{2}{r}\,\psi_{,r}+\frac{1}{r^2}\,\psi_{,\theta\theta}+\frac{\cot\theta}{r^2}\,\psi_{,\theta}=0\,. \qquad (28)$$

The asymptotically flat solutions to Eq(28) are

$$\psi(r,\theta)=\sum_{n=0}^{\infty}\frac{a_n\,P_n(\cos\theta)}{r^{n+1}}\,,$$

where $P_n(\cos\theta)$ are the Legendre polynomials and $a_n$ are the multipole coefficients. The other metric potential $\gamma(r,\theta)$ then follows by quadrature from the spherical-coordinate counterparts of Eqs(8.c)(8.d).
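A small numerical sketch of this multipole expansion, assuming Python with numpy and scipy; the coefficient values, the truncation order and the overall sign convention are arbitrary illustrative choices, not fixed by the text above:

```python
# Evaluate a truncated asymptotically flat multipole solution
# psi(r, theta) = sum_n a_n P_n(cos theta) / r^(n+1) of Eq(28).
import numpy as np
from scipy.special import eval_legendre

def psi_multipole(r, theta, coeffs):
    # coeffs[n] plays the role of the multipole coefficient a_n
    x = np.cos(theta)
    return sum(a_n * eval_legendre(n, x) / r**(n + 1)
               for n, a_n in enumerate(coeffs))

coeffs = [-1.0, 0.0, -0.3]   # a monopole plus a small quadrupole (illustrative)
for r in (5.0, 10.0, 100.0):
    print(r, psi_multipole(r, np.pi / 3, coeffs))
# The potential decays like 1/r at large r, as asymptotic flatness requires.
```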
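Returning to the Schwarzschild subsection above, the rod analogue of $\psi_{SS}$ can likewise be checked numerically. The sketch below is a minimal finite-difference test, assuming numpy, that $\psi_{SS}$ satisfies the axisymmetric Laplace equation Eq(8.b) away from the segment $z\in[-M,M]$; the step size and sample points are arbitrary choices.

```python
# Finite-difference check that psi_SS (the uniform-rod potential) solves
# Eq(8.b): psi_,rhorho + psi_,rho / rho + psi_,zz = 0 off the rod.
import numpy as np

M = 1.0  # rod of mass M and length 2M on the z-axis (illustrative value)

def psi_ss(rho, z):
    lp = np.sqrt(rho**2 + (z + M)**2)
    lm = np.sqrt(rho**2 + (z - M)**2)
    L = 0.5 * (lp + lm)
    return 0.5 * np.log((L - M) / (L + M))

def axisym_laplacian(f, rho, z, h=1e-4):
    # central differences for f_,rhorho, f_,rho and f_,zz
    f_rr = (f(rho + h, z) - 2 * f(rho, z) + f(rho - h, z)) / h**2
    f_r = (f(rho + h, z) - f(rho - h, z)) / (2 * h)
    f_zz = (f(rho, z + h) - 2 * f(rho, z) + f(rho, z - h)) / h**2
    return f_rr + f_r / rho + f_zz

for rho, z in [(3.0, 2.0), (5.0, -1.5), (10.0, 7.0)]:
    print(rho, z, axisym_laplacian(psi_ss, rho, z))  # ~0 up to truncation error
```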
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/The_Economist] | [TOKENS: 6206] |
The Economist The Economist is a British news and current affairs journal published in a weekly print magazine format and daily on digital platforms. Variously referred to as a magazine and a newspaper, it publishes stories on topics that include economics, business, geopolitics, technology and culture. Mostly written and edited in London, it has other editorial offices in the United States and in major cities in continental Europe, Asia, and the Middle East. The publication prominently features data journalism, and has a focus on interpretive analysis over original reporting, to both criticism and acclaim. Founded in 1843, The Economist was first circulated by Scottish economist James Wilson to muster support for abolishing the British Corn Laws (1815–1846), a system of import tariffs. Over time, the newspaper's coverage expanded further into political economy and eventually began running articles on current events, finance, commerce, and British politics. Throughout the mid-to-late 20th century, it greatly expanded its layout and format, adding opinion columns, special reports, political cartoons, reader letters, cover stories, art critique, book reviews, and technology features. The paper is recognisable by its fire-engine-red masthead (nameplate) and illustrated, topical covers. Individual articles are written anonymously, with no byline, in order for the paper to speak with one collective voice. It is supplemented by its sister lifestyle magazine, 1843, and a variety of podcasts, films, and books. It is considered a newspaper of record in the UK. The editorial stance of The Economist primarily revolves around classical, social, and most notably economic liberalism. It has supported radical centrism, favouring policies and governments that maintain centrist politics. The newspaper typically champions economic liberalism, particularly free markets, free trade, free immigration, deregulation, and globalisation. Its extensive use of word play and its high subscription price have linked the paper with a high-income elite readership, drawing both positive and negative connotations. In line with this, it claims to have an influential readership of prominent business leaders and policy-makers. History The Economist was founded by the British businessman and banker James Wilson in 1843, to advance the repeal of the Corn Laws, a system of import tariffs. A prospectus for the newspaper from 5 August 1843 enumerated thirteen areas of coverage that its editors wanted the publication to focus on. Wilson described it as taking part in "a severe contest between intelligence, which presses forward, and an unworthy, timid ignorance obstructing our progress", a phrase which still appears on its imprint (US: masthead) as the publication's mission. It has long been respected as "one of the most competent and subtle Western periodicals on public affairs". It was cited by Karl Marx in his formulation of socialist theory because Marx felt the publication epitomised the interests of the bourgeoisie. He wrote that "the London Economist, the European organ of the aristocracy of finance, described most strikingly the attitude of this class." In 1915, revolutionary Vladimir Lenin referred to The Economist as a "journal that speaks for British millionaires". Additionally, Lenin stated that The Economist held a "bourgeois-pacifist" position and supported peace out of fear of revolution. In the currency disputes of the mid-nineteenth century, the journal sided with the Banking School against the Currency School.
It criticised the Bank Charter Act 1844, which, on the basis of the Currency School policy encouraged by Lord Overstone, restricted the number of banknotes the Bank of England could issue; that doctrine eventually developed into monetarism. It blamed the 1857 financial crisis in Britain on "a certain class of doctrinaires" who "refer every commercial crisis and its disastrous consequences" to "excessive issues of bank notes". It identified the causes of the financial crisis as variations in interest rates and a build-up of excess financial capital leading to unwise investments. In 1920, the paper's circulation rose to 6,170. In 1934, it underwent its first major redesign. The current fire-engine-red nameplate was created by Reynolds Stone in 1959. In 1971, The Economist changed its large broadsheet format into a smaller magazine-style perfect-bound format. In 1981, the publication introduced a North American edition, after publishing only the British edition since 1843; the North American circulation had increased more than tenfold by 2010. In January 2012, The Economist launched a new weekly section devoted exclusively to China, the first new country section since the introduction of one on the United States in 1942. In 1991, James Fallows argued in The Washington Post that The Economist used editorial lines that contradicted the news stories they purported to highlight. In 1999, Andrew Sullivan complained in The New Republic that it uses "marketing genius" to make up for deficiencies in original reporting, resulting in "a kind of Reader's Digest" for America's corporate elite. The Guardian wrote that "its writers rarely see a political or economic problem that cannot be solved by the trusted three-card trick of privatisation, deregulation and liberalisation". In 2005, the Chicago Tribune named it the best English-language paper, noting its strength in international reporting, where it does not feel moved to "cover a faraway land only at a time of unmitigated disaster", and that it kept a wall between its reporting and its more conservative editorial policies. In 2008, Jon Meacham, former editor of Newsweek and a self-described "fan", criticised The Economist's focus on analysis over original reporting. In 2012, The Economist was accused of hacking into the computer of Justice Mohammed Nizamul Huq of the Bangladesh Supreme Court, leading to his resignation as chairman of the International Crimes Tribunal. In August 2015, Pearson sold its 50% stake in the newspaper to the Italian Agnelli family's investment company, Exor, for £469 million (US$531 million), and the paper re-acquired the remaining shares for £182 million ($206 million). An investigation by The Intercept, The Nation and DeSmog found that The Economist is one of the leading media outlets that publish advertising for the fossil fuel industry. Journalists who cover climate change for The Economist are concerned that conflicts of interest with the companies and industries that caused climate change and obstructed action will reduce the credibility of their reporting on climate change and cause readers to downplay the climate crisis. Organisation The Economist is a member of the Economist Group. Pearson plc held a 50% shareholding via The Financial Times Limited until August 2015. At that time, Pearson sold its share in the Economist. The Agnelli family's Exor paid £287m to raise its stake from 4.7% to 43.4%, while the Economist paid £182m for the balance of 5.04m shares, which were to be distributed to current shareholders.
Aside from the Agnelli family, smaller shareholders in the company include Cadbury, Rothschild (21%), Schroder, Layton and other family interests, as well as a number of staff and former staff shareholders. A board of trustees formally appoints the editor, who cannot be removed without its permission. The Economist Newspaper Limited is a wholly owned subsidiary of The Economist Group. Sir Evelyn Robert de Rothschild was chairman of the company from 1972 to 1989. Although The Economist has a global emphasis and scope, about two-thirds of the 75 staff journalists are based in the London borough of Westminster. However, because half of all subscribers are in the United States, The Economist has core editorial offices and substantial operations in New York City, Los Angeles, Chicago, and Washington, D.C. The editor-in-chief of The Economist, commonly known simply as "the Editor", is charged with formulating the paper's editorial policies and overseeing corporate operations. Since its 1843 founding, the editors have been the following: Tone and voice Although it has many individual columns, by tradition and current practice the newspaper ensures a uniform voice, aided by the anonymity of writers, throughout its pages, as if most articles were written by a single author; the effect is one of dry, understated wit and precise use of language. The Economist's treatment of economics presumes a working familiarity with fundamental concepts of classical economics. For instance, it does not explain terms like invisible hand, macroeconomics, or demand curve, and may take just six or seven words to explain the theory of comparative advantage. However, articles involving economics do not presume any formal training on the part of the reader and aim to be accessible to the educated layperson. It usually does not translate short French and German quotes or phrases, but it does describe the business or nature of even well-known entities, writing, for example, "Goldman Sachs, an investment bank". The Economist is known for its extensive use of word play, including puns, allusions, and metaphors, as well as alliteration and assonance, especially in its headlines and captions. This can make it difficult to understand for those who are not native English speakers. Widely considered a magazine, The Economist has traditionally and historically persisted in referring to itself as a "newspaper", rather than a "news magazine", despite its switch from broadsheet to perfect-binding format in 1971. On its website The Economist itself clarifies: "By the time the transformation from newspaper to magazine format had been completed, the habit of referring to ourselves as 'this newspaper' had stuck." The Economist's articles often take a definite editorial stance and almost never carry a byline. Not even the name of the editor is printed in the issue. It is a long-standing tradition that an editor's only signed article during their tenure is written on the occasion of their departure from the position. The author of a piece is named in certain circumstances: when notable persons are invited to contribute opinion pieces; when journalists of The Economist compile special reports (previously known as surveys); for the Year in Review special edition; and to highlight a potential conflict of interest over a book review. The names of The Economist editors and correspondents can be located on the media directory pages of the website.
Online blog pieces are signed with the initials of the writer, and authors of print stories are allowed to note their authorship on their personal websites. One anonymous writer of The Economist observed: "This approach is not without its faults (we have four staff members with the initials 'J.P.', for example) but is the best compromise between total anonymity and full bylines, in our view." According to one academic study, the anonymous ethos of the weekly has contributed to strengthening three areas for The Economist: collective and consistent voice, talent and newsroom management, and brand strength. The editors say this is necessary because "collective voice and personality matter more than the identities of individual journalists", and reflects "a collaborative effort". In most articles, authors refer to themselves as "your correspondent" or "this reviewer". The writers of the titled opinion columns tend to refer to themselves by the title (hence, a sentence in the "Lexington" column might read "Lexington was informed..."). American author and long-time reader Michael Lewis criticised the paper's editorial anonymity in 1991, labelling it a means to hide the youth and inexperience of those writing articles. Although individual articles are written anonymously, there is no secrecy over who the writers are, as they are listed on The Economist's website, which also provides summaries of their careers and academic qualifications. In 2009, Lewis included multiple Economist articles in his anthology about the 2008 financial crisis, Panic: The Story of Modern Financial Insanity. John Ralston Saul describes The Economist as a newspaper that "hides the names of the journalists who write its articles in order to create the illusion that they dispense disinterested truth rather than opinion. This sales technique, reminiscent of pre-Reformation Catholicism, is not surprising in a publication named after the social science most given to wild guesses and imaginary facts presented in the guise of inevitability and exactitude. That it is the Bible of the corporate executive indicates to what extent received wisdom is the daily bread of a managerial civilization." Features The Economist's primary focus is world events, politics and business, but it also runs regular sections on science and technology as well as books and the arts. Approximately every two weeks, the publication includes an in-depth special report (previously called surveys) on a given topic. The five main categories are Countries and Regions, Business, Finance and Economics, Science, and Technology. The newspaper goes to press on Thursdays, between 6 p.m. and 7 p.m. GMT, and is available at newsagents in many countries the next day. It is printed at seven sites around the world. Since July 2007, there has also been a complete audio edition of the paper, available at 9 p.m. London time on Thursdays. The audio version of The Economist is produced by the production company Talking Issues. The company records the full text of the newspaper in MP3 format, including the extra pages in the UK edition. The weekly 130 MB download is free for subscribers and available for a fee for non-subscribers. The publication's writers adopt a tight style that seeks to include the maximum amount of information in a limited space. David G. Bradley, publisher of The Atlantic, described the formula as "a consistent world view expressed, consistently, in tight and engaging prose".
The Economist frequently receives letters from its readership in response to the previous week's edition. While it is known to feature letters from senior businesspeople, politicians, ambassadors, and spokespeople, the paper includes letters from typical readers as well. Well-written or witty responses from anyone are considered, and controversial issues frequently produce a torrent of letters. For example, the survey of corporate social responsibility, published in January 2005, produced largely critical letters from Oxfam, the World Food Programme, the United Nations Global Compact, the chairman of BT Group, an ex-director of Shell and the UK Institute of Directors. In an effort to foster diversity of thought, The Economist routinely publishes letters that openly criticise the paper's articles and stance. After The Economist ran a critique of Amnesty International in its issue dated 24 March 2007, its letters page ran a reply from Amnesty, as well as several other letters in support of the organisation, including one from the head of the United Nations Commission on Human Rights. Rebuttals from officials within regimes such as the Singapore government are routinely printed, to comply with local right-of-reply laws without compromising editorial independence. Letters published in the paper are typically between 150 and 200 words long and, from 1843 to 2015, opened with the salutation "Sir". Upon the 2015 appointment of Zanny Minton Beddoes, the first female editor, the salutation was dropped; letters have since had no salutation. Prior to a change in procedure, all responses to online articles were published in "The Inbox". The publication runs several opinion columns whose names reflect their topic: Bagehot (Britain), Charlemagne (Europe), Lexington (the United States), Banyan (Asia), Schumpeter (business) and Buttonwood (finance). Every three months, The Economist publishes a technology report called Technology Quarterly, or simply TQ, a special section focusing on recent trends and developments in science and technology. The feature is also known to intertwine "economic matters with a technology". The TQ often carries a theme, such as quantum computing or cloud storage, and assembles an assortment of articles around the common subject. In September 2007, The Economist launched a sister lifestyle magazine under the title Intelligent Life as a quarterly publication. At its inauguration it was billed as covering "the arts, style, food, wine, cars, travel and anything else under the sun, as long as it's interesting". The magazine focuses on analysing the "insights and predictions for the luxury landscape" across the world. Approximately ten years later, in March 2016, the newspaper's parent company, the Economist Group, rebranded the lifestyle magazine as 1843, in honour of the paper's founding year. It has since remained at six issues per year and carries the motto "Stories of An Extraordinary World". Unlike The Economist, authors' names appear next to their articles in 1843. 1843 features contributions from Economist journalists as well as writers around the world, and photography commissioned for each issue. It is seen as a market competitor to The Wall Street Journal's WSJ. and the Financial Times' FT Magazine. Since its March 2016 relaunch, it has been edited by Rosie Blau, a former correspondent for The Economist. In May 2020 it was announced that the 1843 magazine would move to a digital-only format. The paper also produces two annual reviews and predictive reports titled The World In [Year] and The World If [Year] as part of its The World Ahead franchise.
In both features, the newspaper publishes a review of the social, cultural, economic and political events that have shaped the year and will continue to influence the immediate future. The issue was described by the American think tank the Brookings Institution as "The Economist's annual [150-page] exercise in forecasting". Translated versions of The World In [Year] are distributed, for example, by the Jang Group in Pakistan (Urdu) and by Roularta in Flanders (Dutch). In 2013, The Economist began awarding a "Country of the Year" in its annual Christmas special editions. Selected by the newspaper, this award recognises the country that was "most improved" over the preceding year. In addition to publishing its main newspaper, lifestyle magazine, and special features, The Economist also produces books with topics overlapping those of its newspaper. The weekly also publishes a series of technical manuals (or guides) as an offshoot of its explanatory journalism. Some of these books serve as collections of articles and columns the paper produces. Often columnists from the newspaper write technical manuals on their topic of expertise; for example, Philip Coggan, a finance correspondent, authored The Economist Guide to Hedge Funds (2011). The paper publishes book reviews in every issue, with a large collective review in its year-end (holiday) issue, published as "The Economist's Books of the Year". Additionally, the paper has its own in-house stylebook rather than following an industry-wide writing style template. All Economist writing and publications follow The Economist Style Guide, in its various editions. The Economist sponsors a wide array of writing competitions and prizes throughout the year for readers. In 1999, The Economist organised a global futurist writing competition, The World in 2050. Co-sponsored by Royal Dutch/Shell, the competition included a first prize of US$20,000 and publication in The Economist's annual flagship publication, The World In. Over 3,000 entries from around the world were submitted via a website set up for the purpose and at various Royal Dutch Shell offices worldwide. The judging panel included Bill Emmott, Esther Dyson, Sir Mark Moody-Stuart, and Matt Ridley. In the summer of 2019, the paper launched the Open Future writing competition with an inaugural youth essay-writing prompt about climate change. During this competition the paper accepted a submission from an artificially intelligent computer writing program. Since 2006, The Economist has produced several podcast series, a number of which remain in production. It has also produced several limited-run podcast series, such as The Prince (on Xi Jinping), Next Year in Moscow (on Russian emigrants and dissidents following the 2022 invasion of Ukraine), Boss Class (on business management) and Scam Inc, an eight-part series about the growing business and impact of scams. In September 2023, The Economist announced the launch of Economist Podcasts+, a paid subscription service for its podcast offerings. In 2014 The Economist launched its short-form news app Espresso. The product offers a daily briefing from the editors, published every day of the week except Sunday. The app is available to paid subscribers and as a separate subscription. Data journalism The presence of data journalism in The Economist can be traced to its founding year of 1843. Initially, the weekly published basic international trade figures and tables.
The paper first included a graphical model in 1847, a letter featuring an illustration of various coin sizes; its first non-epistolary chart, a tree map visualising the size of coal fields in America and England, was included in November 1854. This early adoption of data-based articles was estimated by DataJournalism.com to have come "a 100 years before the field's modern emergence". The paper's transition from broadsheet to magazine-style formatting led to the adoption of coloured graphs, first in fire-engine red during the 1980s and then in a thematic blue in 2001. The Economist's editors and readers developed a taste for more data-driven stories throughout the 2000s. Starting in the late 2000s, the paper began to publish more and more articles that centred solely on charts, some of which were published online every weekday. These "daily charts" are typically followed by a short, 500-word explanation. In September 2009, The Economist launched a Twitter account for its data team. In 2015, the data-journalism department, a dedicated team of data journalists, visualisers and interactive developers, was created to lead the paper's data journalism efforts. The team's output soon included election forecasting models, covering the French presidential elections of 2017 and 2022 and the US presidential and congressional elections in 2020, among others. In late 2023, the data team advertised for a political data scientist to bolster its political forecasting efforts. In order to ensure transparency in the team's data collection and analysis, The Economist maintains a corporate GitHub account to publicly disclose its models and software wherever possible. In October 2018, the paper introduced a "Graphic Detail" section featuring large charts and maps in both its print and digital editions, which ran until November 2023. Historically, the publication has also maintained a section of economic statistics, such as employment figures, economic growth, and interest rates; these statistical publications have come to be seen as authoritative and decisive in British society. The Economist also publishes rankings of business schools and undergraduate universities. In 2015, it published its first ranking of U.S. universities, focusing on comparable economic advantages. The data for the rankings is sourced from the U.S. Department of Education, and the rankings are calculated as a function of median earnings through regression analysis. Among the best-known data indexes the weekly publishes is the Big Mac Index, its light-hearted purchasing-power comparison of currencies. Opinions The editorial stance of The Economist primarily revolves around classical, social and, most notably, economic liberalism. Since its founding, it has supported radical centrism, favouring policies and governments that maintain centrist politics. The Economist typically champions neoliberalism, particularly free markets, free trade, free immigration, deregulation, and globalisation. When The Economist was founded, the term economism denoted what would today be termed "economic liberalism". The activist and journalist George Monbiot has described it as neoliberal, while noting that it occasionally accepts the propositions of Keynesian economics where these are deemed more "reasonable". The Economist favours a carbon tax to fight global warming. According to one former editor, Bill Emmott, "The Economist's philosophy has always been liberal, not conservative".
Alongside other publications such as The Guardian, The Observer and The Independent, it supports the United Kingdom becoming a republic. Individual contributors take diverse views. The Economist favours the support, through central banks, of banks and other important corporations. This principle can, in a much more limited form, be traced back to Walter Bagehot, the third editor of The Economist, who argued that the Bank of England should support major banks that got into difficulties. Karl Marx deemed The Economist the "European organ" of "the aristocracy of finance". The Economist has also supported liberal causes on social issues such as recognition of gay marriage and the legalisation of drugs; it criticises the U.S. tax model, and appears to support some government regulation on health issues, such as smoking in public, as well as bans on smacking children. The Economist consistently favours guest worker programmes, parental choice of school, and amnesties, and once published an "obituary" of God. The Economist also has a long record of supporting gun control. In 2019, The Economist received backlash for suggesting that transgender people should be sterilised; it subsequently apologised for the statement. In 2021, it was criticised for publishing an "anti-transgender screed". In British general elections, The Economist has endorsed the Labour Party (in 2005 and 2024), the Conservative Party (in 2010 and 2015), and the Liberal Democrats (in 2017 and 2019), and it has supported both Republican and Democratic candidates in the United States. The Economist put its stance this way: What, besides free trade and free markets, does The Economist believe in? "It is to the Radicals that The Economist still likes to think of itself as belonging. The extreme centre is the paper's historical position". That is as true today as when Crowther [Geoffrey, Economist editor 1938–1956] said it in 1955. The Economist considers itself the enemy of privilege, pomposity and predictability. It has backed conservatives such as Ronald Reagan and Margaret Thatcher. It supported the Americans in Vietnam. But it has also endorsed Harold Wilson and Bill Clinton, and espoused a variety of liberal causes: opposing capital punishment from its earliest days, while favouring penal reform and decolonisation, as well as, more recently, gun control and gay marriage. In 2008, The Economist commented of Cristina Fernández de Kirchner, then the president of Argentina: "Dashing hopes of change, Argentina's new president is leading her country into economic peril and social conflict". The Economist also called for Bill Clinton's impeachment, and later for Donald Rumsfeld's resignation after the emergence of the Abu Ghraib torture and prisoner abuse. Although The Economist initially gave vigorous support to the U.S.-led invasion of Iraq, it later called the operation "bungled from the start" and criticised the "almost criminal negligence" of the Bush administration's handling of the Iraq War, while maintaining in 2007 that pulling out in the short term would be irresponsible. In an editorial marking its 175th anniversary, The Economist criticised adherents of liberalism for becoming too inclined to protect the political status quo rather than pursue reform.
It called on liberals to return to advocating bold political, economic and social reforms: protecting free markets, land and tax reform in the tradition of Georgism, open immigration, a rethink of the social contract with more emphasis on education, and a revival of liberal internationalism. Circulation Each issue's official date range runs from Saturday to the following Friday. The Economist posts each week's new content online at approximately 21:00 Thursday evening UK time, ahead of the official publication date. From July to December 2019, the average global print circulation was 909,476. As of September 2025, combined print and digital circulation was reported at 1.255 million. On a weekly average basis, however, the paper can reach up to 5.1 million readers across its print and digital runs. Across its social media platforms, it reached an audience of 35 million as of 2016. Circulation grew steadily in the paper's early years: it was 1,800 in 1844, 3,800 in 1850, and 4,300 by 1854. In 1877, the publication's circulation was 3,700, and by 1920 it had risen to 6,000. Circulation increased rapidly after 1945, reaching 100,000 by 1970. Circulation is audited by the Audit Bureau of Circulations (ABC). From around 30,000 in 1960, it rose to nearly 1 million by 2000 and to about 1.3 million by 2016. Approximately half of all sales (54%) originate in the United States, with sales in the United Kingdom making up 14% of the total and continental Europe 19%. Of its American readers, two out of three earn more than $100,000 a year. The Economist has sales, both by subscription and at newsagents, in over 200 countries. The Economist once boasted about its limited circulation; in the early 1990s it used the slogan "The Economist – not read by millions of people". Geoffrey Crowther, a former editor, wrote: "Never in the history of journalism has so much been read for so long by so few." Censorship Sections of The Economist criticising authoritarian regimes are frequently removed from the paper by the authorities in those countries. On 15 June 2006, Iran banned the sale of The Economist when it published a map labelling the Persian Gulf simply as "Gulf", a choice that derives its political significance from the Persian Gulf naming dispute. On 29 May 2025, Reuters reported that Vietnam had banned the May 24 issue, which featured the country's president Tô Lâm on the cover and described him as an ambitious leader who emerged "from the security state" and who "must turn himself into a reformer" to adjust the country's economic model and make it richer. In a separate incident, the government of Zimbabwe went further and imprisoned The Economist's correspondent there, Andrew Meldrum. The government charged him with violating a statute on "publishing untruth" for writing that a woman had been decapitated by supporters of the ruling Zimbabwe African National Union – Patriotic Front party. The decapitation claim was retracted, having allegedly been fabricated by the woman's husband. The correspondent was later acquitted, only to receive a deportation order. On 19 August 2013, The Economist disclosed that the Missouri Department of Corrections had censored its issue of 29 June 2013. According to the letter sent by the department, prisoners were not allowed to receive the issue because "1. it constitutes a threat to the security or discipline of the institution; 2. may facilitate or encourage criminal activity; or 3. may interfere with the rehabilitation of an offender".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Financial_market_participants#Retail_investor] | [TOKENS: 668] |
Financial market participants There are two basic financial market participant distinctions: investors versus speculators, and institutional versus retail. Action in financial markets by central banks is usually regarded as intervention rather than participation. Supply side versus demand side A market participant may either come from the supply side, supplying excess money (in the form of investments) in favor of the demand side, or come from the demand side, demanding excess money (in the form of borrowed equity) in favor of the supply side. This equation originated with Keynesian advocates. The theory explains that a given market may have excess cash; the suppliers of funds may lend it, and those in need of cash may borrow the funds supplied. Hence the equation: aggregate savings equal aggregate investments. The demand side consists of: those in need of cash flows (daily operational needs); those in need of interim financing (bridge financing); and those in need of long-term funds for special projects (capital funds for venture financing). The supply side consists of those holding aggregate savings (retirement funds, pension funds, insurance funds) that can be used in favor of the demand side. The origin of the savings (funds) can be local savings or foreign savings. Pensions or savings can be invested in projects that do not earn, such as school buildings and orphanages, or in projects capable of earning, such as road networks (tollways) and port development. The earnings go to the owners of the funds (savers or lenders), and the margin goes to the banks. When the principal and interest are added up, the total reflects the amount paid by the user (borrower) for the use of the funds; the interest percentage is thus the cost of using the funds (a numeric sketch of this arithmetic appears at the end of this section). Investor versus speculator An investor is any party that makes an investment. However, the term has taken on a specific meaning in finance to describe the particular types of people and companies that regularly purchase equity or debt securities for financial gain in exchange for funding an expanding company. Less frequently, the term is applied to parties who purchase real estate, currency, commodity derivatives, personal property, or other assets. Speculation, in the narrow sense of financial speculation, involves the buying, holding, selling, and short-selling of stocks, bonds, commodities, currencies, collectibles, real estate, derivatives or any valuable financial instrument to profit from fluctuations in its price, as opposed to buying it for use or for income via methods such as dividends or interest. Speculation represents one of four market roles in western financial markets, distinct from hedging, long-term investing and arbitrage. Speculators in an asset may have no intention of having long-term exposure to that asset. Institutional versus retail An institutional investor is an investor, such as a bank, insurance company, retirement fund, hedge fund, or mutual fund, that is financially sophisticated and makes large investments, often held in very large portfolios of investments. Because of their sophistication, institutional investors may often participate in private placements of securities, in which certain aspects of the securities laws may be inapplicable. A retail investor is an individual investor possessing shares of a given security. Retail investors can be further divided into two categories of share ownership: In the United States, as of 2005, about 57 million households owned stocks, and in total, individual investors owned 26% of equities.
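The cost-of-funds arithmetic above can be made concrete with a short Python sketch. All figures below (the principal and both rates) are hypothetical illustrations chosen for the example, not values taken from this article:

```python
# Hypothetical cost-of-funds arithmetic: a borrower repays principal plus
# interest; savers (the owners of the funds) earn the deposit rate, and
# the intermediary bank keeps the spread as its margin.
principal = 100_000.0    # amount borrowed from pooled savings (hypothetical)
lending_rate = 0.08      # rate the borrower pays (hypothetical)
deposit_rate = 0.05      # rate paid to the savers (hypothetical)

interest_paid = principal * lending_rate      # borrower's cost of using the funds
saver_earnings = principal * deposit_rate     # earnings that go to savers/lenders
bank_margin = interest_paid - saver_earnings  # spread kept by the bank

total_repaid = principal + interest_paid      # principal and interest added up
print(f"total repaid: {total_repaid:,.2f}")
print(f"saver earnings: {saver_earnings:,.2f}")
print(f"bank margin: {bank_margin:,.2f}")
```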
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Stability_AI] | [TOKENS: 758] |
Stability AI Stability AI Ltd is a UK-based artificial intelligence company, best known for its text-to-image model Stable Diffusion. History and founding Stability AI was founded in 2019 by Emad Mostaque and Cyrus Hodes. In August 2022, Stability AI rose to prominence with the release of Stable Diffusion, a text-to-image model whose source code and weights were made available. On March 23, 2024, Emad Mostaque stepped down from his position as CEO. The board of directors appointed COO Shan Shan Wong and CTO Christian Laforte as interim co-CEOs of Stability AI. On June 25, 2024, Prem Akkaraju, former CEO of the visual effects company Weta Digital, was appointed CEO of the company. Funding and investors A notable milestone in the company's funding history was a $101 million investment round led by Coatue and Lightspeed Venture Partners, with O'Shaughnessy Ventures LLC also participating. On June 25, 2024, alongside announcing Prem Akkaraju as the new CEO, Stability AI also announced that it had closed an initial round of investment from investor groups including Greycroft, Coatue Management, Sound Ventures, Lightspeed Venture Partners, and O'Shaughnessy Ventures. Sean Parker, entrepreneur, philanthropist, and former president of Facebook, joined the Stability AI board as executive chairman. On September 24, 2024, Stability AI announced that filmmaker, technology innovator, and visual effects pioneer James Cameron had joined its board of directors. Product and application Stability AI has made contributions to the field of generative AI, most notably through Stable Diffusion, an AI model that generates images from textual descriptions. Beyond Stable Diffusion, Stability AI also develops video, audio, 3D, and text models. Stability AI has partnered with Arm to optimize its text-to-audio model, Stable Audio Open, for mobile devices powered by Arm CPUs. Litigation In July 2023, Stability AI co-founder Cyrus Hodes filed a lawsuit in the US District Court for the Northern District of California against CEO Emad Mostaque and the company, alleging fraud, misrepresentation, and breach of fiduciary duty. Hodes claimed that Mostaque deceived him into selling his 15% stake in the company for $100 in two transactions, in October 2021 and May 2022, based on false representations that Stability AI was essentially worthless. Just three months after the final transaction, Stability AI raised $101 million in funding at a valuation of $1 billion. The lawsuit alleges that at current valuations, Hodes' stake would be worth over $150 million. Hodes also accused Mostaque of embezzling company funds to pay for personal expenses, including rent for a lavish London apartment, luxury shopping sprees, and a $90,000 diamond ring purchased by Mostaque's wife using company funds. Separately, Stability AI has faced legal challenges from Getty Images, which filed two lawsuits against the company: one in the High Court of England and Wales in London and another in Delaware federal court. The Delaware lawsuit, part of broader concerns about the use of copyrighted material in AI training datasets, alleged that the company misused over 12 million photos from Getty's collection to train its AI image-generation system, Stable Diffusion. Getty Images alleges that Stability AI copied these images without proper licensing to enhance Stable Diffusion's ability to generate accurate depictions from user prompts.
In November 2025, the High Court of England and Wales ruled that Stability AI had not infringed copyright through its use of Getty Images photographs as a training set for its image generator.
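As context for the product at issue, text-to-image generation with a Stable Diffusion checkpoint is commonly run through Hugging Face's diffusers library. The sketch below is a minimal illustration, not code from Stability AI; the checkpoint name and prompt are examples, and a CUDA-capable GPU is assumed (drop the float16 setting to run on CPU).

```python
# Minimal text-to-image sketch with a Stable Diffusion checkpoint,
# using the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint name; any compatible Stable Diffusion weights work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision; omit this argument on CPU
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# The textual description from which an image is generated.
prompt = "a lighthouse on a cliff at sunset, oil painting"
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```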
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-controllersquare-174] | [TOKENS: 10728] |
PlayStation (console) The PlayStation (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges, in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been a primary manufacturer for the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 a.m. the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renunciation, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" that native companies do not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony briefly halted its research, but ultimately decided to develop what it had created with Nintendo and Sega into its own console, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite Ohga's enthusiasm, a majority of those present at the meeting remained opposed, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995) to the console. Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that it would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony still had no developers of their own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked the Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance removed many of the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interoperability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of software should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect of development given the 3.5 megabyte restriction. 
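The discipline that such a restriction imposes is easy to illustrate. Below is a minimal sketch, in C, of a fixed-budget ("arena") allocator of the kind commonly used on fixed-memory consoles of the era; the buffer size, the names, and the reset-between-levels scheme are illustrative assumptions for the sketch, not part of Sony's actual libraries.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical budget: a slice of the console's main RAM left over
   after code, stack, and system libraries. Size is illustrative. */
#define HEAP_BYTES (1536 * 1024)

static uint8_t heap[HEAP_BYTES];
static size_t heap_used = 0;

/* Bump allocator: hand out 4-byte-aligned memory from a fixed block
   and fail visibly once the budget is exhausted. There is no free();
   the whole arena is reset between levels. */
void *arena_alloc(size_t bytes)
{
    size_t aligned = (bytes + 3u) & ~(size_t)3u;
    if (heap_used + aligned > HEAP_BYTES)
        return NULL;              /* over budget: assets must be cut */
    void *p = &heap[heap_used];
    heap_used += aligned;
    return p;
}

void arena_reset(void) { heap_used = 0; }

int main(void)
{
    void *vertices = arena_alloc(64 * 1024);  /* e.g. a mesh buffer */
    return vertices ? 0 : 1;
}
```

Under a scheme like this, every allocation is a visible charge against a hard budget, which is the kind of accounting a 3.5 MB machine forced on teams used to the more forgiving memory of a PC.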
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, head of Sony's interactive publishing business, summoned SCEA president Steve Race to the conference stage, who simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high-street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One form) countrywide on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be launched officially, and the officially distributed Sega Saturn initially dominated the market; as the Sega console was withdrawn, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of some 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans such as "Live in Your World. Play in Ours." and "U R NOT E", stylised with the controller's geometric button symbols standing in for certain letters; the red "E" in the latter was read as "ready", making the slogan "you are not ready". The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "Bullshit. 
Let me show you how ready I am." As the console's appeal grew, Sony's marketing broadened from its earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush-fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", with neither console leading in sales for any meaningful length of time. In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving the milestone faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000-based CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, a sampling rate of up to 44.1 kHz, and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or link multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate up to 4,000 sprites per frame and render around 180,000 texture-mapped, light-sourced polygons per second, or 360,000 polygons per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw the parallel port removed as well, with the final revisions retaining only the serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software necessary to program PlayStation games and applications, including a C compiler. 
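For a sense of what the throughput figures above mean in practice, the short C sketch below converts the quoted per-second polygon rates into per-frame budgets and works out how much of the 1 MB of video RAM remains once a framebuffer is allocated. The 30 frames-per-second target and the double-buffered 16-bit 320×240 display mode are illustrative assumptions for the calculation; actual games chose different trade-offs.

```c
#include <stdio.h>

/* Back-of-the-envelope budgets from the figures quoted above:
   360,000 flat-shaded (or 180,000 textured) polygons per second,
   and 1 MB of video RAM. Frame rate and display mode are assumed. */
int main(void)
{
    const int flat_per_sec = 360000, textured_per_sec = 180000;
    const int fps = 30;                /* a typical target frame rate */

    printf("flat-shaded polygons per frame: %d\n", flat_per_sec / fps);
    printf("textured polygons per frame:    %d\n", textured_per_sec / fps);

    const int vram = 1024 * 1024;      /* 1 MB of video RAM           */
    const int fb   = 320 * 240 * 2;    /* one 16-bit 320x240 buffer   */
    printf("VRAM left for textures/CLUTs:   %d bytes\n", vram - 2 * fb);
    return 0;
}
```

On these assumptions a game has roughly 12,000 flat-shaded (or 6,000 textured) polygons to spend per frame, and a little over 700 KB of video RAM left for textures and colour look-up tables, which is why texture budgets dominated so much of PlayStation-era art direction.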
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included an adaptor for a car cigarette lighter, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ◯, ✕, □). Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on systems such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movement was necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks inward), the Dual Analog features an "Analog" button and an LED beneath the "Start" and "Select" buttons, which toggle analogue functionality on or off. The controller also features rumble support, though Sony decided that this haptic feedback would be removed from all overseas iterations before the United States release. 
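Pads of this era typically report their state to the console as a compact bitmask polled once per frame, which is why the distinction between digital buttons and analogue sticks mattered to game code. The C sketch below illustrates the idea: the active-low convention (a cleared bit means "pressed") matches how PlayStation pads are commonly documented, but the specific bit positions and macro names here are assumptions made for the sketch rather than values from Sony's libraries.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative decoding of a digital pad report: a 16-bit word in
   which a cleared bit means "pressed". Bit positions are assumed. */
#define BTN_START    (1u << 3)
#define BTN_TRIANGLE (1u << 12)
#define BTN_CIRCLE   (1u << 13)
#define BTN_CROSS    (1u << 14)
#define BTN_SQUARE   (1u << 15)

static int pressed(uint16_t report, uint16_t mask)
{
    return (report & mask) == 0;        /* active-low: 0 = pressed */
}

int main(void)
{
    uint16_t report = 0xFFFF & ~BTN_CROSS;   /* pretend ✕ is held   */
    if (pressed(report, BTN_CROSS))
        printf("cross pressed: confirm (Western layout)\n");
    return 0;
}
```

An analogue-capable pad extends a report like this with extra bytes for stick positions, which is why the Dual Analog's "Analog" toggle changes what the console receives each poll.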
A Sony spokesman stated that the rumble feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. A Nintendo spokesman, however, denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two ("dual") vibration motors ("shock"). Unlike its predecessor, the DualShock features analogue sticks with textured rubber grips, longer handles, and slightly different shoulder buttons, and includes rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled even though it had received promotion in Europe and North America. In addition to playing games, most PlayStation models can play audio CDs. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby bringing up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSs on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive (the Tiger H/E assembly), prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. The console would not boot a game disc unless a specific wobble frequency was present in the data of the disc's pregap sector; the same system was also used to encode a disc's regional lockout (a conceptual sketch of this check appears below). The signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive. The drive could not detect the wobble frequency itself, however, because the laser pick-up system of any ordinary optical drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process; discs duplicated on such a drive therefore omit the wobble. Early PlayStations, particularly early SCPH-1000-series models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. 
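The disc check described above reduces to a few lines of logic. The C sketch below is purely conceptual: on real hardware the authentication is performed by the drive controller rather than by game-visible software, and the function names here are hypothetical. The region strings, however, are those widely documented for genuine discs ("SCEI" for Japan, "SCEA" for North America, "SCEE" for Europe), which is also how the wobble doubled as a regional lockout.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Conceptual sketch of the boot-time disc check described above.
   On real hardware the drive, not software, recovers a few bytes
   encoded in the deliberate wobble of the disc's pregap; genuine
   discs yield a region string such as "SCEI", "SCEA" or "SCEE",
   while burned copies yield nothing. The read is stubbed here. */
static bool drive_read_wobble_code(char out[5])
{
    strcpy(out, "SCEE");     /* stand-in for a genuine European disc */
    return true;             /* a burned copy would return false     */
}

static bool disc_may_boot(const char *console_region)
{
    char code[5] = {0};
    if (!drive_read_wobble_code(code))
        return false;                        /* no wobble signature  */
    return strcmp(code, console_region) == 0; /* region must match   */
}

int main(void)
{
    printf("boots: %s\n", disc_may_boot("SCEE") ? "yes" : "no");
    return 0;
}
```

Because an ordinary burner cannot reproduce the wobble, the check fails on copies without affecting how a normal CD drive reads the disc's data, which is what made the scheme compatible with Red Book tolerances.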
Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling those of Sega and Nintendo. 
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that the PlayStation was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute some 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller print runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
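The pricing claim above is easiest to see with concrete numbers. The figures in the C sketch below are hypothetical round numbers chosen for illustration, not historical prices; they simply show how a retail price roughly 40% lower can leave per-unit net revenue almost unchanged once media costs fall from cartridge levels to CD levels.

```c
#include <stdio.h>

/* Illustrative arithmetic only: all prices here are hypothetical
   round numbers, chosen to show how a ~40% lower retail price can
   coexist with roughly unchanged net revenue per unit when the
   media cost drops from cartridge to CD levels. */
int main(void)
{
    const double cart_price = 65.0, cart_cost = 25.0;   /* assumed */
    const double cd_price   = 40.0, cd_cost   = 1.5;    /* assumed */

    printf("retail price cut:  %.0f%%\n",
           100.0 * (cart_price - cd_price) / cart_price);
    printf("net per cartridge: $%.2f\n", cart_price - cart_cost);
    printf("net per CD:        $%.2f\n", cd_price - cd_cost);
    return 0;
}
```

On these assumed figures the retail price falls by nearly 40% while net revenue per unit stays within a couple of dollars, which is the mechanism the source describes rather than any specific historical price list.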
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo themselves or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-FOOTNOTEFoner1988xxv-133] | [TOKENS: 17273] |
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy had outpaced the French, German and British economies combined. By 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear-weapon state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first name of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" rarely refers to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all of the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and, spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the U.S. victory, Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated that slaves taking refuge in non-slave states be forcibly returned to their owners in the South, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, by which time the United States' economy outpaced those of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy in 1893; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the rise of radio for mass communication and the invention of early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies, Italy and Germany, until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing fully in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather events than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform, the latter as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries maintain formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
U.S. geopolitical attention has also turned to the Indo-Pacific, where the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement. The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in the reserves. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons, the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, operating at every level from local to national. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. State police departments have authority within their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing the rulings of U.S. federal courts and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world (531 people per 100,000 inhabitants) and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for PPP, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasuries market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including its partners in the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states and, in 2023, had the fourth-highest median household income, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth when those expenditures are measured as a percentage of GDP. In 2022, the United States published the second-highest number of scientific papers, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, while some were privately owned. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are mostly privately owned, U.S. railroads lag behind those of the rest of the world in electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, the U.S. had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written language. English is the de facto official language of the United States, and in 2025 Executive Order 14224 declared it official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities (New York City, Los Angeles, Chicago, and Houston) had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of the population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. Literacy in the U.S. is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (413 awards in total). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditures on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than all other nations in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees; they include the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity, the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Institute of Museum and Library Services, and the Federal Council on the Arts and the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that employed around 220,000 people and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. One study found that Manhattan's Garment District has been synonymous with American fashion since the industry's beginnings there in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the world's most commercially successful and best-attended films. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, and potatoes, along with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and the Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185 spectators, setting the world record for the largest crowd at a women's sporting event at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. This article incorporates text from a free content work, licensed under CC BY-SA IGO 3.0, taken from World Food and Agriculture – Statistical Yearbook 2023, FAO. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Complex_adaptive_system] | [TOKENS: 1520] |
Contents Complex adaptive system A complex adaptive system (CAS) is a system that is complex in that it is a dynamic network of interactions, but the behavior of the ensemble may not be predictable according to the behavior of the components. It is adaptive in that the individual and collective behavior mutate and self-organize corresponding to the change-initiating micro-event or collection of events. It is a "complex macroscopic collection" of relatively "similar and partially connected micro-structures" formed in order to adapt to the changing environment and increase their survivability as a macro-structure. The complex adaptive systems approach builds on replicator dynamics. The study of complex adaptive systems, a subset of nonlinear dynamical systems, is an interdisciplinary matter that attempts to blend insights from the natural and social sciences to develop system-level models and insights that allow for heterogeneous agents, phase transition, and emergent behavior. Overview The term complex adaptive systems, or complexity science, is often used to describe the loosely organized academic field that has grown up around the study of such systems. Complexity science is not a single theory—it encompasses more than one theoretical framework and is interdisciplinary, seeking the answers to some fundamental questions about living, adaptable, changeable systems. Complex adaptive systems may adopt hard or softer approaches. Hard theories use formal language that is precise, tend to see agents as having tangible properties, and usually see objects in a behavioral system that can be manipulated in some way. Softer theories use natural language and narratives that may be imprecise, and agents are subjects having both tangible and intangible properties. Examples of hard complexity theories include complex adaptive systems (CAS) and viability theory, and a class of softer theory is Viable System Theory. Many of the propositional considerations made in hard theory are also of relevance to softer theory. From here on, the focus is on CAS. The study of CAS focuses on complex, emergent and macroscopic properties of the system. John H. Holland said that CAS "are systems that have a large number of components, often called agents, that interact and adapt or learn." Typical examples of complex adaptive systems include: climate; cities; firms; markets; governments; industries; ecosystems; social networks; power grids; animal swarms; traffic flows; social insect (e.g. ant) colonies; the brain and the immune system; and the cell and the developing embryo. Human social group-based endeavors, such as political parties, communities, geopolitical organizations, war, supply chains and terrorist networks are also considered CAS. The internet and cyberspace, composed, collaborated on, and managed by a complex mix of human–computer interactions, are also regarded as a complex adaptive system. CAS can be hierarchical, but more often exhibit aspects of "self-organization". The term complex adaptive system was coined in 1968 by sociologist Walter F. Buckley, who proposed a model of cultural evolution which regards psychological and socio-cultural systems as analogous with biological species. In the modern context, complex adaptive system is sometimes linked to memetics, or proposed as a reformulation of memetics. Michael D. 
Cohen and Robert Axelrod, however, argue the approach is not social Darwinism or sociobiology because, even though the concepts of variation, interaction and selection can be applied to modelling 'populations of business strategies', for example, the detailed evolutionary mechanisms are often distinctly unbiological. As such, complex adaptive system is more similar to Richard Dawkins's idea of replicators. What distinguishes a complex adaptive system (CAS) from a pure multi-agent system (MAS) is the focus on top-level properties and features like self-similarity, complexity, emergence and self-organization. Theorists define an MAS as a system composed of multiple interacting agents, whereas in CAS, the agents as well as the system are adaptive and the system is self-similar. A CAS is a complex, self-similar collectivity of interacting, adaptive agents. Complex adaptive systems feature a high degree of adaptive capacity, giving them resilience in the face of perturbation. Other important properties include adaptation (or homeostasis), communication, cooperation, specialization, spatial and temporal organization, and reproduction. Such properties can manifest themselves on all levels: cells specialize, adapt and reproduce themselves just like larger organisms do. Communication and cooperation take place on all levels, from the agent- to the system-level. In some cases the forces driving co-operation between agents in such a system can be analyzed using game theory. Some of the most important characteristics of complex adaptive systems are: Robert Axelrod & Michael D. Cohen identify a series of key terms from a modeling perspective: Turner and Baker synthesized the characteristics of complex adaptive systems from the literature and tested these characteristics in the context of creativity and innovation. Each of these eight characteristics had been shown to be present in the creativity and innovation processes: Adaptation mechanisms The organisation of a complex adaptive system relies on the use of internal models, mental models or schemas guiding the behaviors of the system. We can distinguish three levels of adaptation of a system: Modelling and simulation CAS are occasionally modelled by means of agent-based models and complex network-based models. Agent-based models are developed by means of various methods and tools, primarily by first identifying the different agents inside the model. Another method of developing models for CAS involves developing complex network models by using interaction data of various CAS components. Models and simulations are often used to study proposed system phenomena in large infrastructural systems, where empirical testing would be prohibitively expensive and risky. Examples include the use of agent-based and graph-theoretic approaches for digital supply-chain twins and anomaly detection in high-speed networks. In 2013 SpringerOpen/BioMed Central launched an online open-access journal on the topic of complex adaptive systems modelling (CASM). Publication of the journal ceased in 2020. Evolution of complexity Living organisms are complex adaptive systems. Although complexity is hard to quantify in biology, evolution has produced some remarkably complex organisms. This observation has led to the common misconception of evolution being progressive and leading towards what are viewed as "higher organisms". If this were generally true, evolution would possess an active trend towards complexity. 
In such an active process, the most common amount of complexity would increase over time. Indeed, some artificial life simulations have suggested that the generation of CAS is an inescapable feature of evolution. However, the idea of a general trend towards complexity in evolution can also be explained through a passive process. This involves an increase in variance, but the most common value, the mode, does not change. Thus, the maximum level of complexity increases over time, but only as an indirect product of there being more organisms in total. This type of random process is also called a bounded random walk. In this hypothesis, the apparent trend towards more complex organisms is an illusion resulting from concentrating on the small number of large, very complex organisms that inhabit the right-hand tail of the complexity distribution and ignoring simpler and much more common organisms. This passive model emphasizes that the overwhelming majority of species are microscopic prokaryotes, which comprise about half the world's biomass and constitute the vast majority of Earth's biodiversity. Therefore, simple life remains dominant on Earth, and complex life appears more diverse only because of sampling bias. If there is a lack of an overall trend towards complexity in biology, this would not preclude the existence of forces driving systems towards complexity in a subset of cases. These minor trends would be balanced by other evolutionary pressures that drive systems towards less complex states. |
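The passive explanation is easy to make concrete in a toy simulation. The sketch below is a minimal illustration, not taken from any cited study: the function name, lineage count, and step count are assumptions of this example. It runs many lineages as unbiased random walks in "complexity" with a reflecting lower bound and reports the mode and maximum of the resulting distribution.

```python
import random
from collections import Counter

def passive_complexity_walk(n_lineages=5000, n_steps=500, floor=1):
    """Unbiased random walk in 'complexity' with a reflecting lower bound.

    Every lineage starts at the minimal complexity `floor`. Each step adds or
    subtracts one unit with equal probability, but a lineage can never drop
    below the floor (nothing can be simpler than minimally alive).
    """
    complexities = [floor] * n_lineages
    for _ in range(n_steps):
        for i in range(n_lineages):
            complexities[i] = max(floor, complexities[i] + random.choice((-1, 1)))
    return complexities

random.seed(1)
walk = passive_complexity_walk()
mode, count = Counter(walk).most_common(1)[0]
print("mode of complexity:", mode)       # stays pinned near the floor
print("maximum complexity:", max(walk))  # drifts upward as variance grows
```

Because each step is equally likely to add or remove complexity, the rising maximum is a side effect of variance spreading against the lower bound rather than of any drive towards complexity, which is exactly the bounded-random-walk picture described above.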
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Klabautermann] | [TOKENS: 3171] |
Contents Klabautermann A Klabautermann (German: [klaˈbaʊtɐˌman]; "hobgoblin"), or Kalfater ("caulker"), is a water kobold in Frisian, German and Dutch folklore that assists sailors and fishermen on the North Sea in their duties. Dutch/Belgian tales of kaboutermanneken described them as cave dwellers in mountains, who may help out humans who put out offerings of bread and butter, sometimes out in the open, but other times at their millhouse or farmstead. Nomenclature The Klabautermann (also spelt Klaboterman, Klabotermann, Kalfatermann), sometimes even referred to by the name "kobold", is a creature from the beliefs of fishermen and sailors of Germany's north coast and the Low Countries (Netherlands, etc.) on the North Sea, as well as of the Baltic countries. The Estonian counterparts are called kotermann or potermann, names borrowed from foreign speech. An etymology deriving the name from the verb kalfatern ("to caulk") has been suggested by the linguist Friedrich Kluge, who considered "Klabautermann" merely to be a variant on "Kalfater" or "caulker" (attested by Temme). This was accepted by the Germanist Wolfgang Stammler (d. 1965) and has come to be regarded as the explanation "held in favor" for its word origin. The Grimms' dictionary had listed the forms klabatermann, klabotermann, klaboltermann, and kabautermännchen and conjectured the word to derive from Low German klabastern 'to knock, or rap'. It was evidently a piece of folk etymology told by lore informants that the name klabatermann derived from the noises they made. Elsewhere, Grimms' dictionary under "kobold" cites Cornelis Kiliaan's Dutch-Latin dictionary (1620) conjecturing that kaboutermann may derive from cobalus/κόβαλος, where it is glossed in Latin as a "human-imitating demon", with German kobal given as equivalent. Grimm also left a note that the Klabautermann could be tied to the shorter Dutch form kabout meaning "house spirit", found in an 1802 dictionary. His name has been etymologically related to the caulking hammer, perhaps bridging a gap between the "caulk" and "noise" theories. Heinrich Schröder thought an earlier form *Klautermann could be reconstructed, derived from the verb klettern 'to climb'. The Klabautermann has been classed as a ship-kobold by some sources. Müllenhoff's anthology placed No. 431 "Das Klabautermännchen" in the category of house-kobolds ("Hauskobolde"), Nos. 430–452. Ludwig Bechstein discusses the klabautermann alongside the nis or nis-puk of Northern Germany as being both water sprites and house sprites. His chapter under the German title "klabautermännchen" discusses folklore cave-dwelling earth spirits, localized in the Netherlands, where they are called in Dutch kaboutermanneken (cf. § Kaboutermanneken below). The Klabautermann possibly assimilates or conflates some of the lore of other spiritual beings, such as the Danish skibsnisse or "ship sprite" and the household spirit puk of Northern Germany (cognate with the puck of English folklore). General description The Klabautermann only shows itself if the ship is doomed to sink, according to lore. Only a few have [lived to] see it, since seeing it was bad luck. The sight of a Klabautermann is an ill omen, and in the 19th century, it was the most feared sight among sailors. However, when it does appear to humans, it typically appears as a small humanlike figure carrying a tobacco pipe and wearing a nightcap-style sailor's cap and a red or grey jacket. 
According to one source, the fiery red-headed and white-bearded sprite has green teeth, wears yellow hose with riding boots, and a "steeple-crowned" pointy hat. The rarely seen klabautermann (aka Kalfater or "caulker"), according to Pomeranian sources, is about two feet tall, wears a red jacket, a sailor's wide trousers, and a round hat, but others say he is completely naked. Or it may appear in the guise of the ship's carpenter. The physical descriptions are many and varied according to various sources, as collected by Buss. The sprite's likeness is carved and attached to the mast as a symbol of good luck. An oral source stated there was a way to catch sight of it without danger. One must go alone at night between 12 and 1 o'clock to the capstan-hole (German: Spillloch) and look between one's legs past the hole. Then the spirit can be seen standing in front of the hole. But if it appears naked, no article of clothing must be given by any means, for it will be enraged at being pitied. The Klabautermann is associated with the wood of the ship on which it lives. He enters the ship via the wood used to build it. A belief existed that if a stillborn or unbaptized child was buried in the heath under a tree, and the wood was then used to build a ship, the child's soul in the form of the klabautermann would transfer onto that ship. (Similarly, a superstition recorded from the island of Rügen held that a child who suffered a fracture could be helped towards healing by passing him over a split oak three times at sunrise; that oak, bound back together and allowed to grow, would eventually host the soul of the mended person, which became a Klabautermann when its timber was used for a ship.) Feilberg, in his monograph on the nisse, compares these German examples of the skibsnisse to the more general Danish belief that a person's soul, or a wight (vætte), resides in any tree that is to be harvested for timber. But the ship's unsinkability was then assured by the spirit's presence. Its presence aboard ship is said to ward against illness, fire, even pirate attack. But there will eventually come a time when the spirit gives up and determines the vessel's seaworthiness will not hold, and decides to leave, in which case the ship is forlorn and is bound to sink (cf. below). He is said to usually sit under the capstan (Ankerwinde, "anchor windlass"). But he makes himself useful to the needs of the ship when it is in disrepair or struck by a squall, etc., preventing the ship from sinking. Thus he may help pump water from the hold, arrange cargo or ballast, and hammer away to plug a leak that has sprung until a carpenter arrives at the scene. Objects broken on the ship by day will be magically repaired during the night by the sprite, so that he is also called Klütermann or "joiner", "repairman". However, they can also prankishly tangle up the lines if shipmates are callous about maintaining their tackle. Other informants say that a klabautermann in a bad mood will indicate it by noisy actions, throwing firewood around, rapping on the ship's hull, breaking objects, and finally even slapping around the crewmen, thus acquiring his name as noisemaker. When the ship is beyond saving and will sink, he again turns into a poltergeist: his rancor will be heard as he runs up and down the ladder of the ship, ropes will rattle, and the hold will make noises (or he may climb to the tip of the bowsprit ("Boogsprit") or fore-mast and splash into the water), at which point it is time for the crew to abandon ship. 
But others say the ship will remain seaworthy and will not sink, that is, until he leaves. The Klabautermann's benevolent behaviour lasts as long as the crew and captain treat the creature respectfully. A Klabautermann will not leave its ship until it is on the verge of sinking. To this end, superstitious sailors in the 19th century demanded that others pay the Klabautermann respect. Ellett has recorded one rumour that a crew even threw its captain overboard for denying the existence of the ship's Klabautermann. Heinrich Heine reported that one captain created a place for his ship's Klabautermann in his cabin and that the captain offered the spirit the best food and drink he had to offer. The Klabautermann is easily angered. Its ire manifests in pranks such as tangling ropes and laughing at sailors who shirk their chores. More recently, the Klabautermann is sometimes described as having more sinister attributes, and blamed for things that go wrong on the ship. This incarnation of the Klabautermann is more demon- or goblin-like, prone to play pranks and, eventually, dooming the ship and her crew. This deterioration of the Klabautermann's image probably stems from sailors, upon returning home, telling stories of their adventures at sea. Since life at sea can be rather dull, all creatures—real, mythical, and in between—eventually became the focus of rather ghastly stories. Bechstein applies the Germanized name Klabautermännchen to what he describes as dwarf-like earth spirits dwelling in caves, reputed to live in particular areas of Holland; they are known in Dutch as the kabouter or the Kaboutermanneken. These tales had previously appeared in Johann Wilhelm Wolf's anthology of Dutch folklore. According to one anecdote, there was a small hill called Kabouterberg, riddled with caverns, where the kaboutermanneken dwelled; this hill was situated near the village of Gelrode (on the outskirts of Aarschot, Belgium). The miller could leave out his worn-out millstone and hope to have it sharpened by the sprite by offering bread and butter with beer; it would also wash linen. A different version places the Kaboutermannekensberg between Turnhout and Kasterlee in the Belgian part of the Kempen region, with a generally evil reputation for stealing livestock, money, and even kitchen utensils. But a miller in Kempenland did obtain the help of the mysterious being, who performed work overnight in exchange for the bribe of bread and butter. But after remaining hidden to spy on this kaboutermanneken, he discovered the sprite to be stark naked. He then made the mistake of leaving him clothing, which the sprite gladly took, but it would not return to the mill afterwards. The miller attempted to catch the wayward sprite, but was outwitted. According to a version from Landorp (North Brabant province, Netherlands), the klaboutermanneken would do all sorts of household chores: make coffee, milk the cows, clean, and even do the favor of ferrying a man across the Demer. But it played favorites, and tormented the neighbors with endless pranks, drinking their cows' milk and spoiling their butter. Beings called Rothmützchen ("redcap", from German Mütze) or klabber reputedly multiplied wood; rather, they would bring a few scrawny twigs which appeared to be of little use as kindling, yet once ignited sustained as much fire as a bundle of wood. 
In one tale, the kaboutermanneken aided a young man to marry a rich man's daughter by boosting the amount of guilders in his possession from eight hundred to a thousand, the amount stipulated by the bride's father as a condition for marriage. Bechstein's embellishment gives the youth only a paltry sum: "not even a hundred Batzen", or only a few guilders. Origins Belief in the Klabautermann dates to at least the 1770s, according to the oral source who told Heinrich Heine in the 1820s that the lore went back at least fifty years; however, none of the attestations antedate c. the 1810s, i.e., no written records exist that are more than a decade older than the collections of legends begun in the 1820s. The two early folkloric sources both come from the North Sea, collected by T. F. M. Richter (1806) from Dutch sailors, and by Heinrich Heine from a sea captain of the Frisian island of Norderney. German writer Heinrich Smidt believed that the sea kobolds, or Klabautermann, entered German folklore via German sailors who had learned about them in England. However, historians David Kirby and Merja-Liisa Hinkkanen dispute this, finding no evidence of such a belief in Britain. An alternate view connects the Klabautermann myths with the story of Saint Phocas of Sinope, as that story spread from the Black Sea to the Baltic Sea. Scholar Reinhard Buss instead sees the Klabautermann as an amalgamation of early and pre-Christian beliefs mixed with new creatures. Literary references In August Kopisch's poem Klabautermann, the poet takes literary license to embellish the Klabautermann as a violin-fiddling, dancing, gay-spirited musician. Georg Engel, in his novel Hann Klüth, der Philosoph (1905), has the character Malljohann witness a giggling and hand-clapping klabautermann arising out of the water. The maritime sprite has also appeared in the literary works of Friedrich Gerstäcker, Theodor Storm, and later, Christian Morgenstern. Klabund, a portmanteau of Klabautermann and Vagabund ('vagabond'), was the adopted pen name of writer Alfred Henschke (1890–1928). In the United States, Henry Wadsworth Longfellow wrote "The Musician's Tale: The Ballad of the Carmilhan" in Tales of a Wayside Inn (1863), in which the "Klaboterman" appears to the crew of the doomed ship Valdemar, saving only the honest cabin boy. Sculptural depictions Several Klabautermann sculptures have been publicly installed. A Klabautermann water fountain built by Hermann Joachim Heinrich Pagels was placed in the schoolyard of the Pestalozzischule Bremerhaven (the Pestalozzianum foundation's school at Bremerhaven) in 1912, but is now located near the German Maritime Museum, Bremerhaven. A bronze sculpture by Walter Rössler (d. 1996) stands at the Nordfriesland Museum Nissenhaus. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Superbase_database#History] | [TOKENS: 1226] |
Contents Superbase (database) Superbase is an end-user desktop database program that started on the Commodore 64 and was ported from that to various operating systems over the course of more than 20 years. It has also generally included a programming language to automate database-oriented tasks, and later versions included WYSIWYG form and report designers as well as more sophisticated programming capabilities. History It was originally created in 1983 by Precision Software for the Commodore 64 and 128 and later the Amiga and Atari ST. In 1989, it was the first database management system to run on a Windows computer. Precision Software, a UK-based company, was the original creator of the product Superbase. Superbase was and still is used by a large number of people on various platforms. It was often used only as an end-user database, but a very large number of applications were built throughout industry, government, and academia, and these were often of significant complexity. Some of these applications continue in use to the current day, mostly in small businesses. The initial versions were text mode only, but with the release of the Amiga version, Superbase became the first product to use the now common VCR control panel for browsing through records. It also supported a number of different media formats, including images, sounds, and video. Superbase was often referred to as the multimedia database in its early years, when such features were uncommon. The Amiga version also featured an internal language and the capability to generate front-end "masks" for queries and reports, years before Microsoft Access. This version was a huge success, which resulted in versions being created for a number of platforms using the same approach. Eventually a Microsoft Windows version was released, and a couple of years later the company was sold by its founders to Software Publishing Corporation. SPC sold off the non-Windows versions of the product and, after releasing version 2 and reaching the late alpha stages of version 3, sold the product to a company called Computer Concepts Corporation. This relatively unknown company created a subsidiary called Superbase, Inc. and, after finishing off the late-stage alpha of version 3 and launching it as Superbase 95, eventually appeared to have lost interest in the product, at which point it was bought by a small group of former customers and brought back to the UK. This company, Superbase Developers plc, continued to extend and support the product through Superbase Classic. The Amiga version was sold to Mr. Hardware Computers. Joe Rothman further developed the program and renamed it SBase Pro 4. Mr. Hardware Computers and SBase Pro 4 were sold to Russ Norrby, who released version 1.36n, the newest version. A next-generation rewrite of the product, initially called Superbase Next Generation (SBNG) and including a new object-oriented programming language called SIMPOL, was begun in 1999–2000. It remained primarily an alpha product, although it was billed as a beta release in 2005 with promises that a true release was around the corner. In 2006, SIMPOL was sold to RealBasics Ltd, which was later renamed Simpol Ltd (www.simpol.com). In April 2009 this company launched SIMPOL Professional, the next-generation product, as a cross-platform language and database tool set. In February 2009, it was announced that Superbase Developers plc was in liquidation. In March 2010 Papatuo Holdings Ltd. 
purchased the Superbase family of products from the official receivers of Superbase Developers plc. In 2014, Pap Holdings (formerly Papatuo Holdings), the company that purchased the Superbase intellectual property when Superbase Developers plc was liquidated in 2010, also purchased the SIMPOL intellectual property upon the liquidation of Simpol Limited. Following versions 1.83 through 2.06, version 2.10 was released in July 2017. In August 2018, Superbase Software Limited released a free version for non-commercial use. Since the passing of a lead developer, the project has been on hold, though the developers are working on version 3.0. Uses Superbase has been used for very basic end-user tasks, but its real strength lies in the ability of relatively untrained programmers to create complex applications. These are typically built up over time as the need arises. The types of applications run the gamut from accounting systems, ERP/MRP packages, business information systems, production control systems, and similar complex products down to very basic membership list or contact management systems. Features It contains a high-speed, versatile ISAM database engine and its own powerful dialect of BASIC, as well as sophisticated forms and report engines. It also includes powerful support for acting as the front-end for one or more SQL databases. Its biggest drawback is the fact that it was written to the 16-bit Windows API and was not easily portable to the 32-bit version. The Next Generation rewrite was intended to cure that and has made the package even easier to use and more powerful. From a casual programmer's perspective, the fact that the database is not based on SQL is a significant advantage, since the level of complexity is far less and it is easier for the user to grasp the concepts of how to manage and traverse the database (see the sketch below). There are numerous powerful features in the product; a few of them are: Reception Shay Addams of Ahoy! in 1984 stated that Superbase had "numerous advanced features seldom seen in a database manager for the C-64", including the database programming language. It concluded that "anyone planning on harnessing the C-64 in an office or business environment can't go wrong with SuperBase". "I found little to dislike in Superbase 2", Richard O. Mann of Compute! wrote in 1992. He said that the software used the Windows UI well; despite wishing for better documentation, Mann concluded that "the program is a good example of what Windows applications are all about". |
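To make the non-SQL, record-navigation access model concrete, here is a minimal Python sketch of the general indexed-sequential (ISAM) idea referenced under Features: locate a record by key, then walk forward in key order. This is a generic toy under my own assumptions, not Superbase's actual engine or its BASIC dialect, and all class and method names are illustrative.

```python
from bisect import insort, bisect_left

class IsamTable:
    """Toy ISAM-style table: records are reachable both by direct key lookup
    and by sequential traversal in key order via a sorted index."""

    def __init__(self):
        self.records = {}  # key -> record data
        self.index = []    # sorted list of keys

    def insert(self, key, record):
        if key not in self.records:
            insort(self.index, key)  # keep the index sorted on insert
        self.records[key] = record

    def find(self, key):
        """Direct lookup, analogous to selecting a record by key."""
        return self.records.get(key)

    def scan_from(self, key):
        """Sequential traversal in key order starting at `key`, analogous to
        finding a record and then stepping through its successors."""
        pos = bisect_left(self.index, key)
        for k in self.index[pos:]:
            yield k, self.records[k]

table = IsamTable()
table.insert("SMITH", {"city": "Leeds"})
table.insert("ADAMS", {"city": "York"})
table.insert("JONES", {"city": "Bath"})
print(table.find("JONES"))               # {'city': 'Bath'}
for key, rec in table.scan_from("JONES"):
    print(key, rec)                      # JONES, then SMITH, in key order
```

The point of the sketch is the mental model: the user thinks in terms of "find a record, then step to the next one" rather than declarative set-based queries, which is the simplicity advantage the paragraph above describes.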
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_note-37] | [TOKENS: 8626] |
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, and Threads. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as a part of Big Tech, which refers to the six largest tech companies in the United States, Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia, which are also the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters valued the shares at $38 each, valuing the company at $104 billion, the largest valuation yet for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Mobility and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations—surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods—and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline. 
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. Standard & Poor's added Facebook to its S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook similar to TikTok that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, the Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent their competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop their own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to pool in $10 million each to fund the launch of the cryptocurrency coin named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem before being shut down and sold in January 2022 after backlash from government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021, about Facebook's plan to rebrand the company and change its name. 
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the pivoting efforts to building the metaverse – without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. Based on Facebook's PR campaign, the name change reflects the company's shifting long term focus of building the metaverse, a digital extension of the physical world by social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project, thus it would be transferring its rights to the name to Meta Platforms, and the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertisement revenue, an amount equal to roughly 8% of its revenue for 2021. In meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% reduction in the company's share price which occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to published reports by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a “more traditional” role. In March 2022, Meta (except Meta-owned WhatsApp) and Instagram were banned in Russia and added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (up to genocidal calls) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses which could play music and take pictures. 
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales on the line of products as of September 2022, though Meta has expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline when its total revenue slipped by 1% to $28.8 billion. Analysts and journalists attributed the loss to its advertising business, which has been limited by Apple's app tracking transparency feature and the number of people who have opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite reaching the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gifted the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive's aim to make their data available to social science research. In 2023, Ireland's Data Protection Commissioner imposed a record EUR 1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after it announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage but would instead operate on an open-source model, allowing Meta to ascertain what improvements need to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use. An earlier version of Llama had been released to academics. In August 2023, Meta announced the permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platform. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent. 
Its stock reached an all-time high in January 2024, bringing Meta within 2% of a $1 trillion market capitalization. In November 2023, Meta Platforms had launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscription model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan had his Instagram posts removed over false copyright claims, despite his content being original and free of copyrighted material. He discovered that extortionists were behind the takedowns, offering to restore his content for $3,000 or to provide ongoing protection for $1,000 per month. This scam, which exploited Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures such as Ammar al-Hakim. The episode highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity". This decision followed allegations that RT and its employees had funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive, and the company instead pivoted to producing a small number of the glasses for internal use. On 4 October 2024, Meta announced a new AI model called Movie Gen, capable of generating realistic video and audio clips from user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertising scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10bn global underwater cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig-butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running financial services ads in Australia to verify information about the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage affecting accounts on all of its social media and messaging applications. Outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception allowing users to call LGBTQ people mentally ill on the basis of their sexual orientation or gender identity. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 Capitol riot. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta Platforms Inc. decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private company funding events of all time. In October 2025, it was announced that Meta would lay off 600 employees in its artificial intelligence unit, which executives described as "bloated" and sought to trim down. The layoffs would affect Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research (FAIR) unit, and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy the mobile messaging company WhatsApp for US$19 billion in cash and stock; the acquisition was completed on October 6. Later that year, Facebook bought Oculus VR for $2.3 billion in cash and stock; Oculus released its first consumer virtual reality headset in 2016. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded to Meta Platforms, Inc., Oculus was rebranded to Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the advertising market in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase Kustomer, a customer-service platform and chatbot startup, to encourage companies to use its platform for business. Kustomer was reportedly valued at slightly over $1 billion. The deal closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators; the examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest lobbying spender among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund of then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration had repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's return to office in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, Meta flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. The publisher, Macmillan, reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "one of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of hosting fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives of partnering with third-party fact-checkers and publicly flagging fake news were regularly ineffective, and appeared to have minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue disseminating a falsified video of US president Joe Biden, even after it had been proven fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform had been excessive, the decision drew criticism from fact-checking institutions, which stated that the changes would make it harder for users to identify misinformation. Meta also faced criticism for weakening its hate speech policies that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, and to explicitly allow users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of a wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that had historically been important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc. and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for serious and repeated breaches of privacy law in connection with the Cambridge Analytica scandal. Each violation of the Privacy Act carries a theoretical liability of $1.7 million, cumulative across violations. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission, 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook, an antitrust lawsuit against Facebook. The lawsuit concerns Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleged that Facebook held monopolistic power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual of an internet in which the Facebook-WhatsApp-Instagram entity did not exist, and to prove that the combination harmed competition or consumers. In November 2025, it was ruled that Meta did not violate antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging poor working conditions for the workers who moderate Facebook posts in Kenya. According to the lawsuit, 260 screeners were declared redundant with unclear reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company had refused to take responsibility.
The company noted that it had developed tools for parents to keep track of their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. It also said it was providing resources specific to eating disorders and developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, filed in 2019, alleged that the company had enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this violated the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the algorithmic ad-targeting tool. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring the personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they had been defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements; the plaintiffs sought approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit, in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021, could proceed. Also in April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users either to allow their personal data to be used for targeted advertising or to pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta had moved the moderation service to the Ghanaian capital, Accra, after legal issues in Kenya, the previous location. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest conditions there are worse than in Kenya, with many workers afraid to speak out for fear of being returned to conflict zones. Workers reported mental illness, suicide attempts, and low pay. On 26 January 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg had approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies, filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and the potential engagement level of the advertisement itself (a simplified sketch of such an auction appears at the end of this article). As with other online advertising platforms such as Google and Twitter, ad targeting is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience: targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some Irish tax, the effective minimum US tax for the subsidiary will be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. the Irish GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta makes use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts, in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have it. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square office in London, on which it reportedly had another 18 years remaining. As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Frances Haugen, the ex-Facebook employee and whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13. See also References External links 37°29′06″N 122°08′54″W |
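As noted in the Revenue section, ad placements are priced by auction together with the expected engagement of the ad. Meta's production auction is proprietary and not described here; purely as an illustrative sketch, the C program below implements a textbook generalized second-price (GSP) auction, in which bidders are ranked by bid and each winner pays just above the next-highest bid. The advertiser names, bid amounts, and two-slot setup are all hypothetical.

#include <stdio.h>
#include <stdlib.h>

/* Minimal generalized second-price (GSP) auction sketch.
 * A textbook illustration, NOT Meta's proprietary auction, which
 * also weighs predicted engagement and ad quality. */
typedef struct { const char *advertiser; double bid; } Bid;

static int by_bid_desc(const void *a, const void *b) {
    double d = ((const Bid *)b)->bid - ((const Bid *)a)->bid;
    return (d > 0) - (d < 0);  /* sort highest bid first */
}

int main(void) {
    Bid bids[] = { {"A", 2.50}, {"B", 4.00}, {"C", 1.75}, {"D", 3.10} };
    int n = sizeof bids / sizeof bids[0];
    int slots = 2;  /* hypothetical number of ad placements */

    qsort(bids, n, sizeof bids[0], by_bid_desc);

    /* Each winner pays the next-highest bid plus a minimal increment. */
    for (int i = 0; i < slots && i < n; i++) {
        double price = (i + 1 < n) ? bids[i + 1].bid + 0.01 : bids[i].bid;
        printf("slot %d: %s wins, pays %.2f\n", i + 1, bids[i].advertiser, price);
    }
    return 0;
}

Run against the sample bids, advertiser B wins the first slot and pays 3.11 while D wins the second and pays 2.51; the second-price rule is what makes truthful-ish bidding attractive in auctions of this family.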
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Hans_Reissner] | [TOKENS: 421] |
Contents Hans Reissner Hans Jacob Reissner, also known as Jacob Johannes Reissner (18 January 1874, Berlin – 2 October 1967, Mt. Angel, Oregon), was a German aeronautical engineer whose avocation was mathematical physics. During World War I he was awarded the Iron Cross second class (for civilians) for his pioneering work on aircraft design. Biography Reissner was born into a wealthy Berlin family that benefited from an inheritance from his great-uncle on his mother's side. As a young engineering graduate, he spent a year in the U.S. working as a draftsman. After this year, he broadened his academic interests to include physics. As a young academic, he published mathematical papers on engineering problems. Before World War I, Reissner designed the first successful all-metal aircraft, the Reissner Canard (or Ente) with both skin and structure made of metal. This was constructed with assistance from Hugo Junkers who had previously shown little interest in aviation. Both were professors at the University of Aachen. The first flight was made on May 23, 1912, with Robert Gsell at the controls. During the Nazi regime Reissner was able to work in the aircraft industry although he did not have an Aryan certificate. In 1935 he lost his post at Technische Universität Berlin due to his Jewish ancestry, and in 1938 he emigrated to the United States. He taught at the Illinois Institute of Technology (1938–44) and the Polytechnic Institute of Brooklyn (1944–54). It was this engineer, rather than a physicist or mathematician, who first solved Einstein's equation for the metric of a charged point mass. His closed-form solution, rediscovered by several other physicists within the next few years, is now called the Reissner–Nordström metric. Eric Reissner (Max Erich Reissner, 1913–1996), his son, developed Mindlin–Reissner plate theory. References External links |
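For reference, the charged-point-mass solution mentioned above, the Reissner–Nordström metric, takes the following standard textbook form, supplied here for context (geometrized units G = c = 1, with M the mass and Q the electric charge; setting Q = 0 recovers the Schwarzschild metric):

ds^2 = -\left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right) dt^2 + \left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right)^{-1} dr^2 + r^2 \left(d\theta^2 + \sin^2\theta \, d\varphi^2\right)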
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft:_The_Unlikely_Tale_of_Markus_%22Notch%22_Persson_and_the_Game_That_Changed_Everything] | [TOKENS: 334] |
Contents Minecraft (book) Minecraft: The Unlikely Tale of Markus "Notch" Persson and the Game That Changed Everything is a book written by Daniel Goldberg and Linus Larsson (and translated by Jennifer Hawkins) about the story of Minecraft and its creator, Markus "Notch" Persson. The book was released on October 17, 2013, and includes many different tips and tricks for the game. Content The book is a biography of Persson that also covers Minecraft's popularity and the Swedish gaming industry. The book describes how Persson was inspired by games like Dungeon Keeper, Dwarf Fortress, and Infiniminer, and how he was convinced that he was onto something big from the very beginning. It also describes how Persson documented the development openly and in continual dialogue with other players. Publication history The book was first published by Norstedts förlag in Sweden in 2012, under the title Minecraft: block, pixlar och att göra sig en hacka (ISBN 9789113049250). An English translation by Jennifer Hawkins was published in 2013 by Seven Stories Press, which claimed that it was the first book about Minecraft written in that language. Reception The book was described by John Biggs of TechCrunch as "beautifully human". Its portrayal of Persson was praised by Publishers Weekly as "moving" and by Paul Stenis in Library Journal as "compelling", but was criticized by Nick Kolakowski of the Washington Independent Review of Books for being too shallow. References This article about a non-fiction book on video games is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-1upcontroller-175] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" against native companies turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research, but decided to turn what it had developed with Nintendo and Sega into a complete console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback on "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the PlayStation was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that it would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over those of third parties; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded software compatibility should Sony decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At Sega's keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who simply said "$299" and left the stage to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994); Battle Arena Toshinden (1995) also contributed. There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared with the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season, compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony launched the console countrywide (in its PS one model) on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be released officially, so the officially distributed Sega Saturn initially took over the market; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after Sega left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" ("Live in Your World. Play in Ours.") and "U R NOT E" (with a red "E", read as "You are not ready"), in which the four geometric shapes derived from the controller's button symbols stood in for missing letters. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly spent at least £100,000 a year in slush-fund money on impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", with neither console having led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to break Sony's dominance of the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate a total of 4,000 sprites and 180,000 texture-mapped, light-sourced polygons per second, in addition to 360,000 flat-shaded polygons per second (see the worked frame budget below). The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models dropped the parallel port, with the final revisions retaining only a single serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (roughly "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service, and came with the documentation and software needed to program PlayStation games and applications in C. 
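To put the GPU throughput figures quoted above in perspective, dividing the per-second polygon rates by the display refresh rate gives the approximate geometry budget available each frame. The short calculation below is illustrative only; it assumes the nominal NTSC (60 Hz) and PAL (50 Hz) field rates rather than any particular game's frame rate.

```python
# Back-of-the-envelope per-frame polygon budgets implied by the quoted
# GPU throughput figures (per-second rates divided by refresh rate).
PER_SECOND_RATES = {
    "textured, light-sourced polygons": 180_000,
    "flat-shaded polygons": 360_000,
}

for refresh_hz in (60, 50):  # nominal NTSC and PAL field rates
    for kind, rate in PER_SECOND_RATES.items():
        print(f"{refresh_hz} Hz: {rate // refresh_hz:>6} {kind} per frame")
# At 60 Hz this works out to roughly 3,000 textured or 6,000 flat-shaded
# polygons per frame, which helps explain the simple models of early 3D games.
```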
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD Combo pack ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ◯, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used in instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking in the sticks), the Dual Analog Controller features an "Analog" button and LED beneath the Start and Select buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that the company had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory-card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or without closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of the PlayStation BIOS on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary disc-manufacturing process was developed which, in conjunction with the console's optical drive, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding: consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and therefore produced duplicates that omitted it, since the laser pick-up system of any ordinary optical drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading (a conceptual sketch of this check appears below). Early PlayStations, particularly early 1000-series models, exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly, with knock-on effects for the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt and no longer point directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises. 
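The pregap-wobble check described above can be modelled, very loosely, as a comparison between the licence string a drive recovers from the groove wobble and the string the console expects for its region. The sketch below is a conceptual illustration only, not firmware code; the region strings SCEI, SCEA and SCEE are the well-documented identifiers for Japanese, North American and European discs, while the `disc_boots` helper is hypothetical.

```python
from typing import Optional

# Conceptual model of the PlayStation's pregap "wobble" check; illustrative
# only, not actual firmware. A burned copy carries no licence string, because
# the burning drive's servo treated the wobble as disc run-out and
# compensated it away during reading.
CONSOLE_REGION_STRINGS = {
    "japan": "SCEI",    # Sony Computer Entertainment Inc.
    "america": "SCEA",  # Sony Computer Entertainment America
    "europe": "SCEE",   # Sony Computer Entertainment Europe
}

def disc_boots(console_region: str, recovered_string: Optional[str]) -> bool:
    """Hypothetical helper: boot only if the licence string recovered from
    the wobble matches the one expected for this console's region."""
    return recovered_string == CONSOLE_REGION_STRINGS[console_region]

assert disc_boots("japan", "SCEI")        # pressed Japanese disc, Japanese console
assert not disc_boots("america", "SCEI")  # regional lockout
assert not disc_boots("europe", None)     # burned copy: wobble data lost
```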
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden) and Kileak: The Blood. Among the first games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel that rivalled the offerings of Sega and Nintendo. 
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market of the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most ever produced for a console. Its success was a significant financial boon for Sony, with the video game division contributing around 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound that it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely because of the proprietary cartridge format's ability to help enforce copy protection, given Nintendo's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games at roughly 40% lower cost to the user than ROM cartridges while still making the same net revenue (illustrated below). In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games for the market, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
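The pricing claim above can be made concrete with a simple, purely hypothetical unit-economics calculation: if the per-unit manufacturing saving of a CD over a cartridge is passed entirely to the shelf price, the publisher's net take per copy is unchanged while the retail price falls. With illustrative round numbers that are not figures from the article (a $70 cartridge game whose media cost $25 to make, against a CD costing about $2):

```latex
% Hypothetical round numbers, for illustration only.
p_{\mathrm{CD}} \;=\; p_{\mathrm{cart}} - \left(c_{\mathrm{cart}} - c_{\mathrm{CD}}\right)
\;\approx\; \$70 - (\$25 - \$2) \;=\; \$47
```

That is a cut on the order of a third, in the neighbourhood of the roughly 40% reduction described above, with the publisher's margin per copy held constant.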
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Articles_needing_additional_references_from_August_2024] | [TOKENS: 94] |
Category:Articles needing additional references from August 2024 This category combines all articles needing additional references from August 2024 (2024-08) to enable us to work through the backlog more systematically. It is a member of Category:Articles needing additional references. Pages in category "Articles needing additional references from August 2024" The following 200 pages are in this category, out of approximately 3,953 total. This list may not reflect recent changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_note-9] | [TOKENS: 1458] |
Contents Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, as well as a social activist and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for empowering youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200 and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation [he]. Farid was named to the Forbes 30 Under 30 list of 2019 and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from Mizrahi Jewish immigrant families from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age and was classified as a problematic and violent student; his ADHD was diagnosed only after he began his university studies. Inspired by his parents' background, however, he aspired to excel at school for the sake of a better future for his family. During elementary school, Maor competed in local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. He was then recruited to the Israel Defense Forces and drafted into the Brakim Program [he], an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of captain. Education and academic career As part of the four-year Brakim Program, Maor completed his bachelor's and master's degrees in mechanical engineering at the Technion with honors. He then began his Ph.D. research as a collaboration with the Israel Atomic Energy Commission (IAEC) in parallel with his military service. The main goals of his Ph.D. research were predicting the irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy-absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks in earthquakes that took place in Italy (2012) and Mexico (2017). The energy-absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to apply the findings to existing sensitive systems, and won funding of 1.5 million NIS from the Pazy Foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D. 
as the youngest Technion graduate of that year, at the age of 24. At the graduation ceremony, he honored his parents by having them receive the diploma on his behalf. That same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. The same year, he received an excellence research grant from the Israel Academy of Sciences and Humanities for leading his research as a collaboration between MIT and the Technion. Social activism Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. From 2010 to 2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" [he] program for at-risk high-school students in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project to mitigate social gaps in Israeli society by empowering youths from the social, economic, and geographic periphery towards excellence, self-fulfillment and formal education. In 2018, Learn to Succeed became an official non-profit organization, and that same year Farid led a crowdfunding campaign that raised 150,000 NIS to expand the organization to a national scale. In 2019, he published the book Learn to Succeed, in which he describes his struggle with ADHD, the violent environment in which he grew up, and his transformation from a violent teenager into the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; gives full scholarships to STEM students from the periphery who serve as mentors to youths, both Jewish and Arab, from disadvantaged backgrounds; runs a hotline providing practical and emotional support online to hundreds of youths, parents and educators; organizes inspirational activities with a military orientation to increase its teenage members' motivation for meaningful military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed for the unit, an opportunity usually reserved for students with the highest grades in the matriculation exams in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel. 
Moreover, he serves as a volunteer in the American Technion Society. Personal life Farid is married to Michal. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/S%26P_Global_Ratings] | [TOKENS: 2316] |
Contents S&P Global Ratings S&P Global Ratings (previously Standard & Poor's and informally known as S&P) is an American credit rating agency (CRA) and a division of S&P Global that publishes financial research and analysis on stocks, bonds, and commodities. S&P is considered the largest of the Big Three credit-rating agencies, which also include Moody's Ratings and Fitch Ratings. Its head office is located at 55 Water Street in Lower Manhattan, New York City. Corporate history The company traces its history back to 1860, with the publication by Henry Varnum Poor of History of Railroads and Canals in the United States. This book compiled comprehensive information about the financial and operational state of U.S. railroad companies. In 1868, Henry Varnum Poor established H.V. and H.W. Poor Co. with his son, Henry William Poor, and published two annually updated hardback guidebooks, Poor's Manual of the Railroads of the United States and Poor's Directory of Railway Officials. In 1906, Luther Lee Blake founded the Standard Statistics Bureau, with the view to providing financial information on non-railroad companies. Instead of an annually published book, Standard Statistics would use 5-by-7-inch cards, allowing for more frequent updates. In 1941, Paul Talbot Babson purchased Poor's Publishing and merged it with Standard Statistics to become Standard & Poor's Corp. In 1966, the company was acquired by The McGraw-Hill Companies, extending McGraw-Hill into the field of financial information services. Credit ratings As a credit rating agency (CRA), the company issues credit ratings for the debt of public and private companies, and of public borrowers such as governments, governmental agencies, and cities. It is one of several CRAs that have been designated a nationally recognized statistical rating organization (NRSRO) by the U.S. Securities and Exchange Commission. S&P rates borrowers on a long-term scale from AAA to D, with intermediate ratings offered at each level between AA and CCC (such as BBB+, BBB, and BBB−). Ratings of BBB− and above are termed investment grade; ratings of BB+ and below are non-investment grade, also known as speculative grade. For some borrowers' issuances, the company may also offer guidance (termed a "credit watch") as to whether the rating is likely to be upgraded (positive), downgraded (negative) or stable. For short-term instruments, the company rates specific issues on a scale from A-1 to D; within the A-1 category, a rating can be designated with a plus sign (+), indicating that the issuer's commitment to meet its obligation is very strong. Country risk and the currency in which the obligor repays the obligation are factored into the credit analysis and reflected in the issue rating. Governance scores S&P has had a variety of approaches to reflecting its opinion of the relative strength of a company's corporate governance practices. Corporate governance serves as investor protection against potential governance-related losses of value, or failure to create value. S&P developed criteria and a methodology for assessing corporate governance, and started issuing Corporate Governance Scores (CGS) in 2000. CGS assessed companies' corporate governance practices; they were assigned at the request of the company being assessed, were non-public (although companies were free to disclose them and sometimes did), and were limited to public U.S. corporations. In 2005, S&P stopped issuing CGS. 
S&P's Governance, Accountability, Management Metrics and Analysis (GAMMA) scores were designed for equity investors in emerging markets and focused on non-financial risk assessment, in particular the assessment of corporate governance risk. S&P discontinued stand-alone governance scores in 2011, "while continuing to incorporate governance analysis in global and local scale credit ratings". In November 2012, S&P published its criteria for evaluating the management and governance credit factors of insurers and non-financial enterprises. These scores are not standalone, but rather a component used by S&P in assessing an enterprise's overall creditworthiness. S&P updated its management and governance scoring methodology as part of a larger effort to include enterprise risk management analysis in its rating of debt issued by non-financial companies. "Scoring of management and governance is made on a scale of weak, fair, satisfactory or strong, depending on the mix of positive and negative management scores and the existence and severity of governance deficiencies." Downgrades of countries On August 5, 2011, following enactment of the Budget Control Act of 2011, S&P lowered the US's sovereign long-term credit rating from AAA to AA+. The accompanying press release said, in part, that the downgrade reflected S&P's view that the fiscal consolidation plan Congress and the administration had agreed to fell short of what would be necessary to stabilize the government's medium-term debt dynamics. The United States Department of the Treasury, which had first called S&P's attention to its $2 trillion error in calculating the ten-year deficit reduction under the Budget Control Act, commented, "The magnitude of this mistake – and the haste with which S&P changed its principal rationale for action when presented with this error – raise fundamental questions about the credibility and integrity of S&P's ratings action." The following day, S&P acknowledged in writing the US$2 trillion error in its calculations, saying the error "had no impact on the rating decision" and adding: In taking a longer term horizon of 10 years, the U.S. net general government debt level with the current assumptions would be $20.1 trillion (85% of 2021 GDP). With the original assumptions, the debt level was projected to be $22.1 trillion (93% of 2021 GDP). In 2013, the Justice Department charged Standard & Poor's with fraud in a $5 billion lawsuit: U.S. v. McGraw-Hill Cos et al., U.S. District Court, Central District of California, No. 13-00779. Because the department did not charge Fitch and Moody's, and did not give access to evidence, there has been speculation as to whether the lawsuit was in retaliation for S&P's decision to downgrade. On April 15, 2013, the Department of Justice was ordered to grant S&P access to the evidence. On November 11, 2011, S&P erroneously announced a cut of France's triple-A rating (AAA). French leaders said that the error was inexcusable and called for even more regulation of private credit rating agencies. On January 13, 2012, S&P did cut France's AAA rating, lowering it to AA+. This was the first time since 1975 that France, Europe's second-biggest economy, had been downgraded. The same day, S&P downgraded the ratings of eight other European countries: Austria, Spain, Italy, Portugal, Malta, Slovenia, Slovakia and Cyprus. Publications The company publishes The Outlook, a weekly investment advisory newsletter for individual and professional investors, published continuously since 1922. CreditWeek is produced by Standard & Poor's Credit Market Services Group; it offers a comprehensive view of the global credit markets, providing credit rating news and analysis. 
Standard & Poor's offers numerous other editorials, investment commentaries and news updates for financial markets, companies, industries, stocks, bonds, funds, economic outlooks and investor education. All publications are available to subscribers. S&P Dow Jones Indices publishes several blogs that do not require a subscription to access, including Indexology, VIX Views and Housing Views. Criticism and scandal Credit rating agencies such as S&P have been cited as contributors to the 2008 financial crisis. Credit ratings of AAA (the highest rating available) were given to large portions of even the riskiest pools of loans in the collateralized debt obligation (CDO) market. When the real estate bubble burst in 2007, many loans went bad due to falling housing prices and the inability of bad creditors to refinance. Investors who had trusted the AAA rating to mean that CDOs were low-risk had purchased large amounts that later experienced staggering drops in value or could not be sold at any price. For example, institutional investors lost $125 million on $340.7 million worth of CDOs issued by Credit Suisse Group, despite their being rated AAA by S&P. Companies pay S&P, Moody's, and Fitch to rate their debt issues. As a result, some critics have contended that the credit rating agencies are beholden to these issuers in a conflict of interest, and that their ratings are not as objective as they ought to be, due to this "pay to play" model. In 2015, Standard & Poor's paid $1.5 billion to the U.S. Justice Department, various state governments, and the California Public Employees' Retirement System to settle lawsuits asserting that its inaccurate ratings had defrauded investors. In April 2009, the company called for "new faces" in the Irish government, which was seen as interfering in the democratic process; in a subsequent statement the company said it had been "misunderstood". S&P acknowledged making a US$2 trillion error in its justification for downgrading the credit rating of the United States in 2011, but stated that the error "had no impact on the rating decision". In November 2012, Judge Jayne Jagot of the Federal Court of Australia found that "a reasonably competent ratings agency could not have rated the Rembrandt 2006-3 CPDO AAA in these circumstances", and that "S&P's rating of AAA of the Rembrandt 2006-2 and 2006-3 CPDO notes was misleading and deceptive and involved the publication of information or statements false in material particulars and otherwise involved negligent misrepresentations to the class of potential investors in Australia, which included Local Government Financial Services Pty Ltd and the councils, because by the AAA rating there was conveyed a representation that in S&P's opinion the capacity of the notes to meet all financial obligations was 'extremely strong' and a representation that S&P had reached this opinion based on reasonable grounds and as the result of an exercise of reasonable care when neither was true and S&P also knew not to be true at the time made." Jagot accordingly found Standard & Poor's jointly liable along with ABN Amro and Local Government Financial Services Pty Ltd. Antitrust review In November 2009, ten months after launching an investigation, the European Commission (EC) formally charged S&P with abusing its position as the sole provider of international securities identification codes for United States securities by requiring European financial firms and data vendors to pay licensing fees for their use. 
"This behavior amounts to unfair pricing," the EC said in its statement of objections which lays the groundwork for an adverse finding against S&P. "The (numbers) are indispensable for a number of operations that financial institutions carry out – for instance, reporting to authorities or clearing and settlement – and cannot be substituted.” S&P has run the CUSIP Service Bureau, the only International Securities Identification Number (ISIN) issuer in the US, on behalf of the American Bankers Association. In its formal statement of objections, the EC alleged "that S&P is abusing this monopoly position by enforcing the payment of licence fees for the use of US ISINs by (a) banks and other financial services providers in the EEA and (b) information service providers in the EEA." It claims that comparable agencies elsewhere in the world either do not charge fees at all, or do so on the basis of distribution cost, rather than usage. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Synthesia_(company)] | [TOKENS: 1069] |
Contents Synthesia (company) Synthesia Limited is a British multinational artificial intelligence company based in London, United Kingdom, that develops software used to create AI-generated video content and synthetic media. Overview Synthesia is most often used by corporations for communication, orientation, and training videos. It has been used in advertising campaigns, reporting, product demonstrations, and to create chatbots. Synthesia's software mimics speech and facial movements based on video recordings of an individual's speech and phoneme pronunciation; from these, a text-to-speech video is created to look and sound like the individual. Users create content via the platform's pre-generated AI presenters or by creating digital representations of themselves (personal avatars) using the platform's AI video editing tool. These avatars can then narrate videos generated from text. As of August 2021, Synthesia's voice database included multiple gender options in over sixty languages. The platform prohibits use of its software to create non-consensual clones, including of celebrities or political figures for satirical purposes; use of an individual's likeness requires explicit consent in addition to a strict pre-screening regimen, to prevent "deepfaking". While the company prohibits use of its technology for misinformation or "news-like content", an October 2023 Freedom House report stated that Synthesia tools had been used by governments in Venezuela, China, Burkina Faso, and Russia to create videos of fake TV news outlets with AI-generated avatars in order to spread propaganda. Actor Dan Dewhirst had signed a contract with the company in 2021, becoming one of the first actors whose likeness would be made into an AI avatar, only to find his likeness used in the Venezuelan propaganda videos. The company stated in February 2024 that it had improved its misuse-detection systems, and in April 2024 that new users of its technology are screened by the company and that content employing it is further vetted by Synthesia moderators. History Synthesia's software utilizes deep learning architecture developed by Lourdes Agapito and Matthias Niessner. The company was co-founded in 2017 by Agapito, Niessner, Victor Riparbelli, and Steffen Tjerrild. In 2018, the company first demonstrated the software's capabilities on the BBC programme Click, presenting a digitization of Matthew Amroliwala speaking Spanish, Mandarin, and Hindi. Through its first two years, Synthesia employed ten people and struggled to make sales, leading it to expand its focus from entertainment studios alone to a variety of businesses. In 2020, Synthesia users were reported to include Amazon, Tiffany & Co. and IHG Hotels & Resorts. In January 2024, the company introduced its AI video assistant, which turns text into video. That April, with a reported 55,000 customers, including half of the Fortune 100, Synthesia launched "expressive avatars". Peter Hill joined Synthesia as CTO in January 2025, following 25 years at Amazon and two years as CEO and CPO of Wildfire Studios. Synthesia raised $3.1 million in seed funding in 2019. In April 2021, the company raised $12.5 million in Series A funding. In December 2021, it raised $50 million in a Series B funding round led by Kleiner Perkins and GV. 
Synthesia reached a total valuation of $1 billion, achieving unicorn status, when it raised $90 million from Accel and Nvidia's venture arm NVentures in June 2023 during its Series C funding round. Counting 60,000 customers the following January, including over 60% of Fortune 100 companies, the company raised $180 million in a Series D round led by NEA, with new investors World Innovation Lab (WiL), Atlassian Ventures and PSP Growth, as well as existing investors GV, MMC Ventures and FirstMark, doubling Synthesia's valuation to $2.1 billion. Capital raised to date reached $330 million in 2025, with the 2025 investments slated to fund product innovation, talent growth, and company expansion in North America, Europe, Japan and Australia. In April 2025, Adobe Inc. invested "an undisclosed amount of funds" in Synthesia for a "strategic" partnership. Around that time, Synthesia rejected an acquisition offer from Adobe that reportedly valued the company at about $3 billion, due to disagreements over pricing. After Synthesia raised a $200 million funding round led by GV, the company's valuation rose to approximately $4 billion in October 2025. Recognition In 2021, Synthesia partnered with Lay's to create the Messi Messages campaign featuring Argentine footballer Lionel Messi. Users created personalized messages with Synthesia's software and sent custom AI-generated video messages from Messi based on their text input. The campaign received a Bronze Cannes Lion Award. In February 2025, UK Science and Technology Minister Peter Kyle commended Synthesia's "pioneering generative AI innovations." |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Workplace_impact_of_artificial_intelligence] | [TOKENS: 2960] |
Contents Workplace impact of artificial intelligence The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled. One potential application is using AI to eliminate hazards by removing humans from hazardous situations that involve risk of stress, overwork, or musculoskeletal injuries. Predictive analytics may also be used to identify conditions that may lead to hazards such as fatigue, repetitive strain injuries, or toxic substance exposure, enabling earlier interventions. Another is to streamline workplace safety and health workflows by automating repetitive tasks, enhancing safety training programs through virtual reality, or detecting and reporting near misses. When used in the workplace, AI also presents the possibility of new hazards. These may arise from machine learning techniques leading to unpredictable behavior and inscrutability in their decision-making, or from cybersecurity and information privacy issues. Many hazards of AI are psychosocial, owing to its potential to cause changes in work organization. These include changes in the skills required of workers, increased monitoring leading to micromanagement, algorithms unintentionally or intentionally mimicking undesirable human biases, and blame for machine errors being assigned to the human operator instead. AI may also lead to physical hazards in the form of human–robot collisions, and to ergonomic risks of control interfaces and human–machine interactions. Hazard controls include cybersecurity and information privacy measures, communication and transparency with workers about data usage, and limitations on collaborative robots. From a workplace safety and health perspective, only "weak" or "narrow" AI that is tailored to a specific task is relevant, as many examples are currently in use or expected to come into use in the near future. "Strong" or "general" AI is not expected to be feasible in the near future, and discussion of its risks is within the purview of futurists and philosophers rather than industrial hygienists. Certain digital technologies are predicted to result in job losses. Starting in the 2020s, the adoption of modern robotics has led to net employment growth. However, many businesses anticipate that automation, or employing robots, will result in job losses in the future; this is especially true for companies in Central and Eastern Europe. Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment. A large number of tech workers have been laid off starting in 2023, and many of those job cuts have been attributed to artificial intelligence. The long-term predicted impact of AI on the workplace remains highly contested. Various academic studies have theorised about the impact of AI on the workplace. A 2025 investigation based on users' interactions with Microsoft's AI chatbot, Copilot, identified forty jobs that had high overlaps with the capabilities of AI. The report concluded that these jobs, which included interpreters and translators, historians, passenger attendants, sales assistants, and writers, would thus experience significant transformation in the workplace by AI. The report garnered high levels of attention in the media, with some outlets claiming these jobs would become obsolete. 
However, members of some of the listed professions criticised the report, suggesting that it had misrepresented their typical workplace activities in order to overstate AI's current capabilities. The historian Chris Campbell argued that the 'report's methods, deliberately or otherwise, de-skill historians away from a job that requires high-level and deeply human analytical skills to one that is tasked solely with the retention and provision of knowledge. Under that flawed rubric, it is little wonder that historians have a high AI applicability score.' Health and safety applications In order for any potential AI health and safety application to be adopted, it requires acceptance by both managers and workers. For example, worker acceptance may be diminished by concerns about information privacy, or by a lack of trust in the new technology arising from inadequate transparency or training. Alternatively, managers may emphasize increases in economic productivity rather than gains in worker safety and health when implementing AI-based systems. AI may increase the scope of work tasks where a worker can be removed from a situation that carries risk. In a sense, while traditional automation can replace the functions of a worker's body with a robot, AI effectively replaces the functions of their brain with a computer. Hazards that can be avoided include stress, overwork, musculoskeletal injuries, and boredom. This can expand the range of affected job sectors into white-collar and service-sector jobs such as medicine, finance, and information technology. As an example, call center workers face extensive health and safety risks due to the job's repetitive and demanding nature and its high rates of micro-surveillance; AI-enabled chatbots reduce the need for humans to perform the most basic call center tasks. Machine learning is used for people analytics to make predictions about worker behavior to assist management decision-making, such as hiring and performance assessment. These could also be used to improve worker health. The analytics may be based on inputs such as online activities, monitoring of communications, location tracking, and voice and body-language analysis of filmed interviews. For example, sentiment analysis may be used to spot fatigue and prevent overwork. Decision support systems can be used in a similar way to, for example, prevent industrial disasters or make disaster response more efficient. For manual material handling workers, predictive analytics and artificial intelligence may be used to reduce musculoskeletal injury. Traditional guidelines are based on statistical averages and are geared towards anthropometrically typical humans. The analysis of large amounts of data from wearable sensors may allow real-time, personalized calculation of ergonomic risk and fatigue management, as well as better analysis of the risk associated with specific job roles. Wearable sensors may also enable earlier intervention against exposure to toxic substances than is possible with area or breathing-zone testing on a periodic basis. Furthermore, the large data sets generated could improve workplace health surveillance, risk assessment, and research.
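As an illustration of the kind of real-time scoring such wearable-sensor systems could perform, the sketch below combines a few hypothetical sensor features into a single ergonomic-risk score. The field names, weights, and thresholds are invented for demonstration only and are not drawn from any ergonomic standard or guideline.

```python
# Illustrative sketch: per-worker ergonomic-risk scoring from wearable data.
# The sensor fields, weights, and thresholds below are hypothetical; a real
# system would derive them from validated ergonomic models and baselines.
from dataclasses import dataclass

@dataclass
class SensorWindow:
    lifts_per_min: float       # lift events detected in the time window
    mean_trunk_flexion: float  # average trunk-flexion angle, degrees
    hours_since_break: float   # time since the worker's last rest break

def ergonomic_risk(w: SensorWindow) -> float:
    """Combine sensor features into a 0-1 risk score (toy weighting)."""
    lift_load = min(w.lifts_per_min / 10.0, 1.0)          # saturate at 10/min
    posture_load = min(w.mean_trunk_flexion / 60.0, 1.0)  # saturate at 60 deg
    fatigue_load = min(w.hours_since_break / 4.0, 1.0)    # saturate at 4 h
    return 0.4 * lift_load + 0.4 * posture_load + 0.2 * fatigue_load

window = SensorWindow(lifts_per_min=7, mean_trunk_flexion=45, hours_since_break=3)
score = ergonomic_risk(window)
if score > 0.7:  # illustrative alert threshold
    print(f"High ergonomic risk ({score:.2f}): suggest rest or task rotation")
else:
    print(f"Risk score: {score:.2f}")
```

The point of the sketch is the design shape rather than the numbers: continuous sensor streams are reduced to windowed features, scored against a personalized model, and used to trigger early interventions instead of relying on population-average lifting guidelines.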
AI has also been used to attempt to make the workplace safety and health workflow more efficient. One example is the coding of workers' compensation claims, which are submitted in prose narrative form and must be manually assigned standardized codes. AI is being investigated to perform this task faster, more cheaply, and with fewer errors.
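As a sketch of what automated claim coding might look like, the toy classifier below maps free-text claim narratives to standardized cause-of-injury codes. The narratives, the code labels, and the choice of TF-IDF with logistic regression are all illustrative assumptions, not a description of any deployed system.

```python
# Toy text classifier assigning standardized codes to free-text claim
# narratives. Narratives and code labels are invented; TF-IDF plus
# logistic regression stands in for whatever model a real system uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "worker slipped on wet floor near loading dock and fell",
    "employee fell from ladder while replacing light fixture",
    "repetitive scanning caused wrist pain over several months",
    "lifting heavy boxes strained lower back",
    "slipped on icy walkway in parking lot",
    "numbness in fingers after years of keyboard work",
]
codes = ["FALL_SAME_LEVEL", "FALL_FROM_HEIGHT", "REPETITIVE_STRAIN",
         "OVEREXERTION", "FALL_SAME_LEVEL", "REPETITIVE_STRAIN"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(narratives, codes)

new_claim = "worker slipped on spilled oil and fell in the warehouse"
print(model.predict([new_claim])[0])  # likely FALL_SAME_LEVEL
```

A production system would train on far larger labeled claim archives and would typically surface a confidence score so that low-confidence narratives are routed back to a human coder.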
AI-enabled virtual reality systems may be useful for safety training for hazard recognition. Artificial intelligence may be used to detect near misses more efficiently. Reporting and analysis of near misses are important in reducing accident rates, but near misses are often underreported because they are not noticed by humans, or are not reported by workers due to social factors. Hazards There are several broad aspects of AI that may give rise to specific hazards. The risks depend on implementation rather than the mere presence of AI. Systems using sub-symbolic AI such as machine learning may behave unpredictably and are more prone to inscrutability in their decision-making. This is especially true if a situation is encountered that was not part of the AI's training dataset, and it is exacerbated in environments that are less structured. Undesired behavior may also arise from flaws in the system's perception (originating either within the software or from sensor degradation), knowledge representation and reasoning, or from software bugs. Flaws may also arise from improper training, such as a user applying the same algorithm to two problems that do not have the same requirements. Machine learning applied during the design phase may have different implications than machine learning applied at runtime. Systems using symbolic AI are less prone to unpredictable behavior. The use of AI also increases cybersecurity risks relative to platforms that do not use AI, and information privacy concerns about collected data may pose a hazard to workers. Psychosocial hazards are those that arise from the way work is designed, organized, and managed, or from its economic and social contexts, rather than from a physical substance or object. They cause not only psychiatric and psychological outcomes such as occupational burnout, anxiety disorders, and depression, but can also cause physical injury or illness such as cardiovascular disease or musculoskeletal injury. Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization, in terms of increasing complexity and interaction between different organizational factors. However, psychosocial risks are often overlooked by designers of advanced manufacturing systems. Einola and Khoreva explore how different organizational groups perceive and interact with AI technologies. Their research shows that successful AI integration depends on human ownership and contextual understanding. They caution against blind technological optimism and stress the importance of tailoring AI use to specific workplace ecosystems. This perspective reinforces the need for inclusive design and transparent implementation strategies. AI is expected to lead to changes in the skills required of workers, requiring training of existing workers, flexibility, and openness to change. The requirement to combine conventional expertise with computer skills may be challenging for existing workers, and over-reliance on AI tools may lead to deskilling of some professions. While AI offers convenience and judgement-free interaction, increased reliance, particularly among Generation Z, may reduce interpersonal communication in the workplace and affect social cohesion. As AI becomes a substitute for traditional peer collaboration and mentorship, there is a risk of diminishing opportunities for interpersonal skill development and team-based learning. This shift could contribute to workplace isolation and changes in team dynamics. Increased monitoring may lead to micromanagement and thus to stress and anxiety, and a perception of surveillance may itself lead to stress. Controls for these hazards include consultation with worker groups, extensive testing, and attention to introduced bias. Wearable sensors, activity trackers, and augmented reality may also lead to stress from micromanagement, both for assembly-line workers and gig workers; gig workers additionally lack the legal protections and rights of formal workers. Newell and Marabelli argue that AI is not merely a technical tool but a transformative force that reshapes workplace structures and decision-making processes, altering power dynamics and employee autonomy and requiring a more nuanced understanding of its social and organizational implications. Their study calls for thoughtful integration of AI that considers its broader impact on work culture and human roles. There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours. Algorithms trained on past decisions may mimic undesirable human biases, for example past discriminatory hiring and firing practices. Information asymmetry between management and workers may lead to stress if workers do not have access to the data or algorithms that are the basis for decision-making. In addition to building a model with inadvertently discriminatory features, intentional discrimination may occur through designing metrics that covertly result in discrimination through correlated variables in a non-obvious way.
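One widely used check for this kind of bias is the "four-fifths rule" from US employment-selection guidance, under which the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below applies that check to invented screening outcomes; the group names and data are hypothetical, and a real audit would use actual selection records and legally defined protected classes.

```python
# Illustrative disparate-impact audit using the "four-fifths" rule.
# The applicant outcomes below are invented for demonstration.
from collections import defaultdict

# (group, was_selected) pairs, e.g. output of a resume-screening model
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in outcomes:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the most-selected group
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A passing impact ratio does not rule out the covert, correlated-variable discrimination described above, which is why audits typically also probe proxy features rather than relying on outcome rates alone.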
In complex human–machine interactions, some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead. Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans, which rules out the common hazard control of isolating the robot behind fences or other barriers, as is widely done for traditional industrial robots. Automated guided vehicles are a type of cobot that, as of 2019, is in common use, often as forklifts or pallet jacks in warehouses or factories. For cobots, sensor malfunctions or unexpected work-environment conditions can lead to unpredictable robot behavior and thus to human–robot collisions. Self-driving cars are another example of AI-enabled robots. In addition, the ergonomics of control interfaces and human–machine interactions may give rise to hazards. Hazard controls AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, as well as information privacy measures. Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues. Proposed best practices for employer-sponsored worker-monitoring programs include using only validated sensor technologies; ensuring voluntary worker participation; ceasing data collection outside the workplace; disclosing all data uses; and ensuring secure data storage. For industrial cobots equipped with AI-enabled sensors, the International Organization for Standardization (ISO) recommended: (a) safety-related monitored stopping controls; (b) human hand guiding of the cobot; (c) speed and separation monitoring controls; and (d) power and force limitations. Networked AI-enabled cobots may share safety improvements with each other. Human oversight is another general hazard control for AI.
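For the speed and separation monitoring control, item (c) above, ISO/TS 15066 defines a minimum protective separation distance that, roughly, sums the distances the human and the robot can travel before the robot comes to a complete stop, plus safety margins. The sketch below implements a simplified version of that idea; all parameter values are invented for illustration and must never be used for a real safety function, which requires the robot's certified specifications and a risk assessment.

```python
# Simplified speed-and-separation monitoring check in the spirit of
# ISO/TS 15066. All values are invented and for illustration only.

def min_protective_distance(v_human, v_robot, t_react, t_stop,
                            s_stop, c_intrusion, z_uncertainty):
    """Minimum separation: distance the human and the robot can close
    before the robot is fully stopped, plus safety margins."""
    human_travel = v_human * (t_react + t_stop)  # human keeps approaching
    robot_travel = v_robot * t_react + s_stop    # robot moves until stopped
    return human_travel + robot_travel + c_intrusion + z_uncertainty

# Invented example values (metres, seconds, metres per second):
s_min = min_protective_distance(
    v_human=1.6,       # assumed human walking speed toward the robot
    v_robot=0.5,       # robot speed toward the human
    t_react=0.1,       # controller reaction time
    t_stop=0.3,        # robot stopping time
    s_stop=0.1,        # additional robot braking distance
    c_intrusion=0.2,   # intrusion distance margin
    z_uncertainty=0.1, # sensor and position uncertainty margin
)

current_separation = 1.0  # reading from the safety-rated sensor, metres
if current_separation < s_min:
    print(f"Separation {current_separation} m < {s_min:.2f} m: protective stop")
else:
    print(f"Separation OK ({current_separation} m >= {s_min:.2f} m)")
```

The design intent is that the check runs continuously: as the human approaches, the controller either slows the robot (shrinking the required distance) or triggers a safety-rated monitored stop before the minimum distance is violated.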
Risk management Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase. Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and focus on economic measures such as wages and employment rates rather than the skill content of jobs. Proxies for skill content include educational requirements and classifications of routine versus non-routine, and cognitive versus physical, jobs. However, these may still not be specific enough to distinguish particular occupations that are affected by AI in distinct ways. The United States Department of Labor's Occupational Information Network is an example of a database with a detailed taxonomy of skills. Additionally, data are often reported at a national level, while there is much geographical variation, especially between urban and rural areas. AI systems in the workplace raise ethical concerns related to privacy, fairness, human dignity, and transparency. According to the OECD, these risks must be addressed through robust governance frameworks and accountability mechanisms. Ethical deployment of AI requires clear policies on data usage, explainability of algorithms, and safeguards against discrimination and surveillance. Standards and regulation As of 2019, ISO was developing a standard on the use of metrics and dashboards (information displays presenting company metrics for managers) in workplaces. The standard is planned to include guidelines both for gathering data and for displaying it in a viewable and useful manner. In the European Union, the General Data Protection Regulation, while oriented towards consumer data, is also relevant for workplace data collection. Data subjects, including workers, have "the right not to be subject to a decision based solely on automated processing". Other relevant EU directives include the Machinery Directive (2006/42/EC), the Radio Equipment Directive (2014/53/EU), and the General Product Safety Directive (2001/95/EC). See also References |
========================================