[SOURCE: https://en.wikipedia.org/wiki/Markus_Persson#cite_note-rubydung-19] | [TOKENS: 3525] |
Markus Alexej Persson (/ˈpɪərsən/ PEER-sən, Swedish: [ˈmǎrːkɵs ˈpæ̌ːʂɔn]; born 1 June 1979), known by the pseudonym Notch, is a Swedish video game programmer and designer. He is the creator of Minecraft, the best-selling video game in history. He founded the video game development company Mojang Studios in 2009. Persson began developing video games at an early age. His commercial success began after he published an early version of Minecraft in 2009. Prior to the game's official retail release in 2011, it had sold over four million copies. After this point Persson stood down as the lead designer and transferred his creative authority to Jens Bergensten. In September 2014 Persson announced his intention to leave Mojang, and in November of that year the company was sold to Microsoft, reportedly for US$2.5 billion, which made him a billionaire. Since 2016 several of Persson's posts on Twitter regarding feminism, race, and transgender rights have caused public controversies. He has been described as "an increasingly polarizing figure, tweeting offensive statements regarding race, the LGBTQ community, gender, and other topics." In an effort to distance itself from Persson, Microsoft removed mentions of his name from Minecraft (excluding one instance in the game's end credits) and did not invite him to the game's tenth anniversary celebration. In 2015 he co-founded a separate game studio called Rubberbrain, which was relaunched in 2024 as Bitshift Entertainment. Early life Markus Alexej Persson was born in Stockholm, Sweden, to a Finnish mother, Ritva, and a Swedish father, Birger, on 1 June 1979. He has one sister. He grew up in Edsbyn until he was seven years old, when his family moved back to Stockholm. In Edsbyn, Persson's father worked for the railroad, and his mother was a nurse. He spent much time outdoors in Edsbyn, exploring the woods with his friends. When Persson was about seven years old, his parents divorced, and he and his sister lived with their mother. His father moved to a cabin in the countryside. Persson said in an interview that they experienced food insecurity around once a month. Persson lost contact with his father for several years after the divorce. According to Persson, his father suffered from depression, bipolar disorder, alcoholism, and medication abuse, and went to jail for robberies. While his father had somewhat recovered during Persson's early life, he later relapsed, contributing to the divorce. His sister also experimented with drugs and ran away from home. Persson gained an interest in video games at an early age. His father was "a really big nerd" who built his own modem and taught Persson to use the family's Commodore 128. On it, Persson played bootleg games and loaded in various type-in programs from computer magazines with the help of his sister. The first game he purchased with his own money was The Bard's Tale. He began programming on the Commodore 128 at the age of seven and produced his first game, a text-based adventure, at the age of eight. By 1994 Persson knew he wanted to become a video game developer, but his teachers advised him to study graphic design, which he did from ages 15 to 18. Although introverted, Persson was well liked by his peers, but after entering secondary school he became a "loner" and reportedly had only one friend. He spent most of his spare time at home playing games and programming. 
He managed to reverse-engineer the Doom engine, which he continued to take great pride in as of 2014. He never finished high school, but was reportedly a good student. Career Persson started his career working as a web designer. He later found employment at Game Federation, where he met Rolf Jansson. The pair worked in their spare time to build the 2006 video game Wurm Online. The game was released through a new entity, "Mojang Specifications AB". Persson left the project in late 2007. As Persson wanted to reuse the name "Mojang", Jansson agreed to rename the company to Onetoofree AB. Between 2004 and 2009 Persson worked as a game developer for Midasplayer (later known as King). There he worked as a programmer, mostly building browser games in Flash. He later worked as a programmer for jAlbum. Prior to creating Minecraft, Persson developed multiple small games. He also entered a number of game design competitions and participated in discussions on the TIGSource forums, a web forum for independent game developers. One of Persson's more notable personal projects was called RubyDung, an isometric three-dimensional base-building game like RollerCoaster Tycoon and Dwarf Fortress. While working on RubyDung, Persson experimented with a first-person view mode similar to that found in Dungeon Keeper. However, he felt the graphics were too pixelated and omitted this mode. In 2009 Persson found inspiration in Infiniminer, a block-based open-ended mining game. Infiniminer heavily influenced his future work on RubyDung, and prompted him to bring the first-person mode, the "blocky" visual style and the block-building fundamentals back to the game. RubyDung is the earliest known Minecraft prototype created by Persson. On 17 May 2009 Persson released the original edition (later called the "Classic version") of Minecraft on the TIGSource forums. He regularly updated the game based on feedback from TIGSource users. Persson released several new versions of Minecraft throughout 2009 and 2010, going through several phases of development including Survival Test, Indev, and Infdev. On 30 June 2010 Persson released the game's Alpha version. While working on the pre-Alpha version of Minecraft, Persson continued working at jAlbum. In 2010, after the release and subsequent success of Minecraft's Alpha version, Persson moved from a full-time role to a part-time role at jAlbum. He left jAlbum later that same year. In September 2010 Persson travelled to Valve Corporation's headquarters in Bellevue, Washington, United States, where he took part in a programming exercise and met Gabe Newell. Persson was subsequently offered a job at Valve, which he turned down in order to continue work on Minecraft. On 20 December 2010 Minecraft moved into its beta phase and began expanding to other platforms, including mobile. In January 2011 Minecraft reached one million registered accounts. Six months afterwards, it reached ten million. The game had sold over four million copies by 7 November 2011. Mojang held the first Minecon from 18 to 19 November 2011 to celebrate the game's full release, and subsequently made it an annual event. Following this, on 11 December 2011, Persson transferred creative control of Minecraft to Jens Bergensten and began working on another game title, 0x10c, although he reportedly abandoned the project around 2013. In 2013 Mojang recorded revenues of $330 million and profits of $129 million. 
Persson has stated that, due to the intense media attention and public pressure, he became exhausted with running Minecraft and Mojang. In a September 2014 blog post he shared his realization that he "didn't have the connection to my fans I thought I had", that he had "become a symbol", and that he did not wish to be responsible for Mojang's increasingly large operation. In June 2014 Persson tweeted "Anyone want to buy my share of Mojang so I can move on with my life? Getting hate for trying to do the right thing is not my gig", reportedly partly as a joke. Persson controlled a 71% stake in Mojang at the time. The offer attracted significant interest from Activision Blizzard, EA, and Microsoft. Forbes later reported that Microsoft wanted to purchase the game as a "tax dodge" to turn its taxable excess liquid cash into other assets. In September 2014 Microsoft agreed to purchase Mojang for $2.5 billion, making Persson a billionaire. He then left the company after the deal was finalised in November. Since leaving Mojang, Persson has worked on several small projects. On 23 June 2014 he founded a company with Jakob Porsér called Rubberbrain AB; the company had released no games by 2021, despite spending SEK 60 million. The company was relaunched as Bitshift Entertainment, LLC on 28 March 2024. Persson expressed interest in creating a new video game studio in 2020, and in developing virtual reality games. He has also since created a series of narrative-driven immersive events called ".party()", which use extensive visual effects and have been hosted in multiple cities. At the beginning of 2025 Persson decided to create a spiritual successor to Minecraft, referred to as "Minecraft 2", in response to the results of a poll on X. However, after speaking to his team, he soon reversed this decision in favour of developing the other option from the poll, a roguelike titled Levers and Chests. Games Persson's most popular creation is the survival sandbox game Minecraft, which was first made publicly available on 17 May 2009 and fully released on 18 November 2011. Persson left his job as a game developer to work on Minecraft full-time until completion. In early 2011 Mojang AB sold the one millionth copy of the game; it sold its second million several months later, and its third several months after that. Mojang hired several new staff members for the Minecraft team, while Persson passed the lead developer role to Jens Bergensten. He stopped working on Minecraft after a deal with Microsoft to sell Mojang for $2.5 billion. This brought his net worth to US$1.5 billion. Persson and Porsér came up with the idea for Scrolls, which combines elements of board games and collectible card games. Persson noted that he would not be actively involved in the game's development and that Porsér would lead it. Persson revealed on his Tumblr blog on 5 August 2011 that he was being sued by a Swedish law firm representing Bethesda Softworks over the trademarked name of Scrolls, claiming that it conflicted with Bethesda's The Elder Scrolls series of games. On 17 August 2011 Persson challenged Bethesda to a Quake 3 tournament to decide the outcome of the naming dispute. On 27 September 2011 Persson confirmed that the lawsuit was going to court. ZeniMax Media, owner of Bethesda Softworks, announced the lawsuit's settlement in March 2012. The settlement allowed Mojang to continue using the Scrolls trademark. In 2018, Scrolls was made available free of charge and renamed Caller's Bane. 
Cliffhorse is a humorous game programmed in two hours using the Unity game engine and free assets. The game took inspiration from Skyrim's physics engine, "the more embarrassing minimum-effort Greenlight games", Goat Simulator, and Big Rigs: Over the Road Racing. The game was released for Microsoft Windows systems as an early access and honourware game on the first day of E3 2014, instructing users to donate Dogecoin to "buy" the game before downloading it. The game accumulated over 280,000 dogecoins. Following the end of his involvement with Minecraft, Persson began pre-production of an alternate-reality space game set in the distant future in March 2012. On April Fools' Day Mojang launched a satirical website for Mars Effect (a parody of Mass Effect), citing the lawsuit with Bethesda as an inspiration. However, the gameplay elements described were genuine, and on 4 April Mojang revealed 0x10c (pronounced "Ten to the C") as a space sandbox title. Persson officially halted production of the game in August 2013. However, C418, the composer of the game's soundtrack (as well as that of Minecraft), released an album of the work he had made for the game. In 2013, Persson made a free game called Shambles in the Unity game engine. Persson has also participated in several Ludum Dare 48-hour game making competitions. Personal life In 2011 Persson married Elin Zetterstrand, whom he had dated for four years. Zetterstrand was a former moderator on the Minecraft forums. They had a daughter together, but by mid-2012 he began to see little of her. On 15 August 2012 he announced that he and his wife had filed for divorce. The divorce was finalised later that year. On 14 December 2011 Persson's father committed suicide with a handgun after drinking heavily. In an interview with The New Yorker, Persson said of his father: When I decided I wanted to quit my day job and work on my own games, he was the only person who supported my decision. He was proud of me and made sure I knew. When I added the monsters to Minecraft, he told me that the dark caves became too scary for him. But I think that was the only true criticism I ever heard from him. Persson later said that he himself suffered from depression and swings between highs and lows in his mood. Persson has criticised the stance of large game companies on piracy. He once stated that "piracy is not theft", viewing unauthorised downloaders as potential future customers. In 2011 Persson said he was a member of the Pirate Party of Sweden. He is also a member of Mensa. He has donated to numerous charities, including Médecins Sans Frontières (Doctors Without Borders). Under his direction, Mojang spent a week developing Catacomb Snatch for the Humble Indie Bundle and raised US$458,248 for charity. He also donated $250,000 to the Electronic Frontier Foundation in 2012. In 2011 he gave $3 million in dividends back to Mojang employees. According to Forbes, his net worth in 2023 was around $1.2 billion. In 2014 Persson was one of the biggest taxpayers in Sweden. Around 2014, he lived in a multi-level penthouse in Östermalm, Stockholm, an area he described as "where the rich people live". In December 2014 Persson purchased a home in Trousdale Estates, a neighbourhood in Beverly Hills, California, in the United States, for $70 million, a record sales price for Beverly Hills at the time. Persson reportedly outbid Beyoncé and Jay-Z for the property. Persson began receiving criticism for political and social opinions he expressed on social media as early as 2016. 
In 2017, he proposed a heterosexual pride holiday, and wrote that those who opposed the idea "deserve to be shot." After facing backlash, he deleted the tweets and rescinded his statements, writing, "So yeah, it's about pride of daring to express, not about pride of being who you are. I get it now." Later in the year, he wrote that feminism is a "social disease" and called the video game developer and feminist Zoë Quinn a "cunt", although he was generally critical of the GamerGate movement. He has described intersectional feminism as a "framework for bigotry" and the use of the word mansplaining as being sexist. Also in 2017, Persson tweeted that "It's okay to be white". Later that year, he stated that he believed in the Pizzagate conspiracy theory. In 2019, he tweeted referencing QAnon, saying "Q is legit. Don't trust the media." Later in 2019, he tweeted in response to a pro-transgender internet meme that, "You are absolutely evil if you want to encourage delusion. What happened to not stigmatizing mental illness?" He then also promoted claims that people were fined for "using the wrong pronoun". However, after facing backlash, he tweeted a day afterwards that he had "no idea what [being trans is] like of course, but it's inspiring as hell when people open up and choose to actually be who they know themselves as. Not because it's a cool choice, because it's a big step. I gues [sic] that's actually cool nvm". Later that year, Microsoft removed two mentions of Persson's name in the "19w13a" snapshot of Minecraft and did not invite him to the 10-year anniversary celebration of the game. A spokesperson for Microsoft stated that his views "do not reflect those of Microsoft or Mojang". He is still mentioned in the End Poem ("a flat, infinite world created by a man called Markus").[citation needed] |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Linux] | [TOKENS: 6858] |
Linux (/ˈlɪnʊks/ LIN-uuks) is a group of open source Unix-like operating systems based on the Linux kernel, a kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution (distro), which includes the kernel and supporting system software and libraries—most of which are provided by third parties—to create a complete operating system. Linux was originally designed as a clone of Unix and is released under the copyleft GPL license. Thousands of Linux distributions exist, many based directly or indirectly on other distributions; popular Linux distributions include Debian, Fedora Linux, Linux Mint, Arch Linux, and Ubuntu, while commercial distributions include Red Hat Enterprise Linux, SUSE Linux Enterprise, and ChromeOS. Linux distributions are frequently used on server platforms. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses and recommends the name "GNU/Linux" to emphasize the use and importance of GNU software in many distributions, causing some controversy. Other than the Linux kernel, key components that make up a distribution may include a display server (windowing system), a package manager, a bootloader and a Unix shell. Linux is one of the most prominent examples of free and open-source software collaboration. While originally developed for x86-based personal computers, it has since been ported to more platforms than any other operating system, and is used on a wide variety of devices including PCs, workstations, mainframes and embedded systems. Linux is the predominant operating system for servers and is also used on all of the world's 500 fastest supercomputers.[g] When combined with Android, which is Linux-based and designed for smartphones, they have the largest installed base of all general-purpose operating systems. Overview The Linux kernel was created by Linus Torvalds in response to the lack of a working kernel for GNU, a Unix-compatible operating system made entirely of free software that Richard Stallman had been developing since 1983. A working Unix-like system called Minix was later released, but its license was not entirely free at the time and it was intended for educational use. The first entirely free Unix for personal computers, 386BSD, did not appear until 1992, by which time Torvalds had already built and publicly released the first version of the Linux kernel on the Internet. Like GNU and 386BSD, Linux did not have any Unix code, being a fresh reimplementation, and therefore avoided the legal issues. Linux distributions became popular in the 1990s and effectively made Unix technologies accessible to home users on personal computers, whereas previously they had been confined to sophisticated workstations. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME, KDE Plasma or Xfce. Distributions intended for servers may omit the graphical user interface altogether, or include a solution stack such as LAMP. The source code of Linux may be used, modified, and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as the GNU General Public License (GPL). The licensing means that anyone may create a novel distribution, and doing so is easier than it would be for an operating system such as macOS or Microsoft Windows. 
The Linux kernel, for example, is licensed under the GPLv2, with an exception for system calls that allows code that calls the kernel via system calls not to be licensed under the GPL. Because of the dominance of Linux-based Android on smartphones, Linux, including Android, has the largest installed base of all general-purpose operating systems as of May 2022[update]. Linux is, as of March 2024[update], used by around 4 percent of desktop computers. The Chromebook, which runs the Linux kernel-based ChromeOS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux is the leading operating system on servers (over 96.4% of the top one million web servers' operating systems are Linux), leads other big iron systems such as mainframe computers,[clarification needed] and is used on all of the world's 500 fastest supercomputers[h] (as of November 2017[update], having gradually displaced all competitors). Linux also runs on embedded systems, i.e., devices whose operating system is typically built into the firmware and is highly tailored to the system. This includes routers, automation controls, smart home devices, video game consoles, televisions (Samsung and LG smart TVs), automobiles (Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota), and spacecraft (Falcon 9 rocket, Dragon crew capsule, and the Ingenuity Mars helicopter). History The Unix operating system was conceived of and implemented in 1969, at AT&T's Bell Labs in the United States, by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in the C programming language by Dennis Ritchie (except for some hardware and I/O routines). The availability of a high-level language implementation of Unix made its porting to different computer platforms easier. As a 1956 antitrust case forbade AT&T from entering the computer business, AT&T provided the operating system's source code to anyone who asked. As a result, Unix use grew quickly and it became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of its regional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as a proprietary product, where users were not legally allowed to modify it. Onyx Systems began selling early microcomputer-based Unix workstations in 1980. Later, Sun Microsystems, founded as a spin-off of a student project at Stanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations did not use commodity PC hardware, for which Linux was later originally developed, it represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system. With Unix increasingly "locked in" as a proprietary product, the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software. Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License (GNU GPL) in 1989. 
By the early 1990s, many of the programs required in an operating system (such as libraries, compilers, text editors, a command-line shell, and a windowing system) were completed, although low-level elements such as device drivers, daemons, and the kernel, called GNU Hurd, were stalled and incomplete. Minix was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of Minix was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000. While attending the University of Helsinki in the fall of 1990, Torvalds enrolled in a Unix course. The course used a MicroVAX minicomputer running Ultrix, and one of the required texts was Operating Systems: Design and Implementation by Andrew S. Tanenbaum. This textbook included a copy of Tanenbaum's Minix operating system. It was with this course that Torvalds first became exposed to Unix. In 1991, he became curious about operating systems. Frustrated by the licensing of Minix, which at the time limited it to educational use only, he began to work on his operating system kernel, which eventually became the Linux kernel. On July 3, 1991, to implement Unix system calls, Linus Torvalds attempted unsuccessfully to obtain a digital copy of the POSIX standards documentation with a request to the comp.os.minix newsgroup. After not finding the POSIX documentation, Torvalds initially resorted to determining system calls from SunOS documentation owned by the university for use in operating its Sun Microsystems server. He also learned some system calls from Tanenbaum's Minix text. Torvalds began the development of the Linux kernel on Minix and applications written for Minix were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications also replaced all Minix components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system; code licensed under the GNU GPL can be reused in other computer programs as long as they also are released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, creating a fully functional and free operating system. Although not released until 1992, due to legal complications, the development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Linus Torvalds has stated that if the GNU kernel or 386BSD had been available in 1991, he probably would not have created Linux. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, some of the project's makefiles included the name "Freax" for about half a year. Torvalds considered the name "Linux" but dismissed it as too egotistical. To facilitate development, the files were uploaded to the FTP server of FUNET in September 1991. Ari Lemmke, Torvalds' coworker at the Helsinki University of Technology (HUT) who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name, so he named the project "Linux" on the server without consulting Torvalds. 
Later, however, Torvalds consented to "Linux". According to a newsgroup post by Torvalds, the word "Linux" should be pronounced (/ˈlɪnʊks/ ⓘ LIN-uuks) with a short 'i' as in 'print' and 'u' as in 'put'. To further demonstrate how the word "Linux" should be pronounced, he included an audio guide with the kernel source code. However, in this recording, he pronounces Linux as /ˈlinʊks/ (LEEN-uuks) with a short but close front unrounded vowel, instead of a near-close near-front unrounded vowel as in his newsgroup post. The adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such as NASA started replacing their increasingly expensive machines with clusters of inexpensive commodity computers running Linux. Commercial use began when Dell and IBM, followed by Hewlett-Packard, started offering Linux support to escape Microsoft's monopoly in the desktop operating system market. Today, Linux systems are used throughout computing, from embedded systems to virtually all supercomputers, and have secured a place in server installations such as the popular LAMP application stack. The use of Linux distributions in home and enterprise desktops has been growing. Linux distributions have also become popular in the netbook market, with many devices shipping with customized Linux distributions installed, and Google releasing their own ChromeOS designed for netbooks. Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being the dominant operating system on smartphones and very popular on tablets and, more recently, on wearables, and vehicles. Linux gaming is also on the rise with Valve showing its support for Linux and rolling out SteamOS, its own gaming-oriented Linux distribution, which was later implemented in their Steam Deck platform. Linux distributions have also gained popularity with various local and national governments, such as the federal government of Brazil. Linus Torvalds is the lead maintainer for the Linux kernel and guides its development, while Greg Kroah-Hartman is the lead maintainer for the stable branch. Zoë Kooyman is the executive director of the Free Software Foundation, which in turn supports the GNU components. Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries. Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additional package management software in the form of Linux distributions. Design Many developers of open-source software agree that the Linux kernel was not designed but rather evolved through natural selection. Torvalds considers that although the design of Unix served as a scaffolding, "Linux grew with a lot of mutations – and because the mutations were less than random, they were faster and more directed than alpha-particles in DNA." Eric S. Raymond considers Linux's revolutionary aspects to be social, not technical: before Linux, complex software was designed carefully by small groups, but "Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. 
Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers." Bryan Cantrill, an engineer of a competing OS, agrees that "Linux wasn't designed, it evolved", but considers this to be a limitation, proposing that some features, especially those related to security, cannot be evolved into, "this is not a biological system at the end of the day, it's a software system." A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, access to the peripherals, and file systems. Device drivers are either integrated directly with the kernel or added as modules that are loaded while the system is running. The GNU userland is a key part of most systems based on the Linux kernel, with Android being a notable exception. The GNU C library, an implementation of the C standard library, works as a wrapper for the system calls of the Linux kernel necessary to the kernel-userspace interface, the toolchain is a broad collection of programming tools vital to Linux development (including the compilers used to build the Linux kernel itself), and the coreutils implement many basic Unix tools. The GNU Project also develops Bash, a popular CLI shell. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System. More recently, some of the Linux community has sought to move to using Wayland as the display server protocol, replacing X11. Many other open-source software projects contribute to Linux systems. Installed components of a Linux system include the following: The user interface, also known as the shell, is either a command-line interface (CLI), a graphical user interface (GUI), or controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default user interface is usually graphical, although the CLI is commonly available through terminal emulator windows or on a separate virtual console. CLI shells are text-based user interfaces, which use text for both input and output. The dominant shell used in Linux is the Bourne-Again Shell (bash), originally developed for the GNU Project; other shells such as Zsh are also used. Most low-level Linux components, including various parts of the userland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks and provides very simple inter-process communication. On desktop systems, the most popular user interfaces are the GUI shells, packaged together with extensive desktop environments, such as KDE Plasma, GNOME, MATE, Cinnamon, LXDE, Pantheon, and Xfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called "X" or "X11". It provides network transparency and permits a graphical application running on one system to be displayed on another where a user may interact with the application; however, certain extensions of the X Window System are not capable of working over the network. Several X display servers exist, with the reference implementation, X.Org Server, being the most popular. 
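The network-transparent, client-server design of X described above can be made concrete with a tiny client. The following is a minimal sketch, not taken from the article, of a C program using Xlib: it connects to whatever X server the DISPLAY environment variable points at (possibly on another machine), maps a window, and exits on the first key press. The window geometry is an arbitrary choice for illustration; it can be built with something like gcc x_demo.c -lX11.

```c
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    /* Connect to the X server named by $DISPLAY; this is where the network
       transparency lives: the server may run on a remote machine. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 320, 240, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);      /* ask the server (and window manager) to show the window */

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);  /* block until the server sends an event */
        if (ev.type == KeyPress)
            break;             /* quit on any key press */
    }
    XCloseDisplay(dpy);
    return 0;
}
```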
Several types of window managers exist for X11, including tiling, dynamic, stacking, and compositing. Window managers provide means to control the placement and appearance of individual application windows, and interact with the X Window System. Simpler X window managers such as dwm, ratpoison, or i3wm provide a minimalist functionality, while more elaborate window managers such as FVWM, Enlightenment, or Window Maker provide more features such as a built-in taskbar and themes, but are still lightweight when compared to desktop environments. Desktop environments include window managers as part of their standard installations, such as Mutter (GNOME), KWin (KDE), or Xfwm (xfce), although users may choose to use a different window manager if preferred. Wayland is a display server protocol intended as a replacement for the X11 protocol; as of 2022[update], it has received relatively wide adoption. Unlike X11, Wayland does not need an external window manager and compositing manager. Therefore, a Wayland compositor takes the role of the display server, window manager, and compositing manager. Weston is the reference implementation of Wayland, while GNOME's Mutter and KDE's KWin are being ported to Wayland as standalone display servers. Enlightenment has already been successfully ported since version 19. Additionally, many window managers have been made for Wayland, such as Sway or Hyprland, as well as other graphical utilities such as Waybar or Rofi. Linux currently has two modern kernel-userspace APIs for handling video input devices: V4L2 API for video streams and radio, and DVB API for digital TV reception. Due to the complexity and diversity of different devices, and due to the large number of formats and standards handled by those APIs, this infrastructure needs to evolve to better fit other devices. Also, a good userspace device library is the key to the success of having userspace applications to be able to work with all formats supported by those devices. Development The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open-source software. Linux is not the only such operating system, although it is by far the most widely used. Some free and open-source software licenses are based on the principle of copyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU General Public License (GPL), is a form of copyleft and is used for the Linux kernel and many of the components from the GNU Project. Linux-based distributions are intended by developers for interoperability with other operating systems and established computing standards. Linux systems adhere to POSIX, Single UNIX Specification (SUS), Linux Standard Base (LSB), ISO, and ANSI standards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT. The Open Group has tested and certified at least two Linux distributions as qualifying for the Unix trademark, EulerOS and Inspur K-UX. Free software projects, although developed through collaboration, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution. 
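Earlier in this section the article mentions V4L2, the kernel-userspace API for video input devices. As a hedged sketch (not from the article) of what that interface looks like from userspace, the program below opens a video device node and issues the VIDIOC_QUERYCAP ioctl to ask the kernel which driver sits behind it; the /dev/video0 path is an assumption and varies by system.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>   /* V4L2 structures and ioctl numbers */

int main(void) {
    /* Device node is an assumption; real systems may expose several /dev/videoN nodes. */
    const char *dev = "/dev/video0";
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        perror(dev);
        return 1;
    }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    /* VIDIOC_QUERYCAP asks the kernel driver to describe the device. */
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }
    printf("driver: %s\ncard: %s\ncapabilities: 0x%08x\n",
           (const char *)cap.driver, (const char *)cap.card, cap.capabilities);
    close(fd);
    return 0;
}
```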
Many Linux distributions manage a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such as apt, yum, zypper, pacman or portage to install, remove, and update all of a system's software from one central location. A distribution is largely driven by its developer and user communities. Some vendors develop and fund their distributions on a volunteer basis, Debian being a well-known example. Others maintain a community version of their commercial distributions, as Red Hat does with Fedora, and SUSE does with openSUSE. In many cities and regions, local associations known as Linux User Groups (LUGs) seek to promote their preferred distribution and by extension free software. They hold meetings and provide free demonstrations, training, technical support, and operating system installation to new users. Many Internet communities also provide support to Linux users and developers. Most distributions and free software / open-source projects have IRC chatrooms or newsgroups. Online forums are another means of support, with notable examples being Unix & Linux Stack Exchange, LinuxQuestions.org and the various distribution-specific support and community forums, such as ones for Ubuntu, Fedora, Arch Linux, Gentoo, etc. Linux distributions host mailing lists; commonly there will be a specific topic such as usage or development for a given list. There are several technology websites with a Linux focus. Print magazines on Linux often bundle cover disks that carry software or even complete Linux distributions. Although Linux distributions are generally available without charge, several large corporations sell, support, and contribute to the development of the components of the system and free software. An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work, leaving about 8.2% to unpaid developers and 4.1% unclassified. Some of the major corporations that provide contributions include Intel, Samsung, Google, AMD, Oracle, and Facebook. Several corporations, notably Red Hat, Canonical, and SUSE have built a significant business around Linux distributions. The free software licenses, on which the various software packages of a distribution built on the Linux kernel are based, explicitly accommodate and encourage commercialization; the relationship between a Linux distribution as a whole and individual vendors may be seen as symbiotic. One common business model of commercial suppliers is charging for support, especially for business users. A number of companies also offer a specialized business version of their distribution, which adds proprietary support packages and tools to administer higher numbers of installations or to simplify administrative tasks. Another business model is to give away the software to sell hardware. This used to be the norm in the computer industry, with operating systems such as CP/M, Apple DOS, and versions of the classic Mac OS before 7.6 freely copyable (but not modifiable). 
As computer hardware standardized throughout the 1980s, it became more difficult for hardware manufacturers to profit from this tactic, as the OS would run on any manufacturer's computer that shared the same architecture. Most programming languages support Linux either directly or through third-party community based ports. The original development tools used for building both Linux applications and operating system programs are found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU Build System. Amongst others, GCC provides compilers for Ada, C, C++, Go and Fortran. Many programming languages have a cross-platform reference implementation that supports Linux, for example PHP, Perl, Ruby, Python, Java, Go, Rust and Haskell. First released in 2003, the LLVM project provides an alternative cross-platform open-source compiler for many languages. Proprietary compilers for Linux include the Intel C++ Compiler, Sun Studio, and IBM XL C/C++ Compiler. BASIC is available in procedural form from QB64, PureBasic, Yabasic, GLBasic, Basic4GL, XBasic, wxBasic, SdlBasic, and Basic-256, as well as object oriented through Gambas, FreeBASIC, B4X, Basic for Qt, Phoenix Object Basic, NS Basic, ProvideX, Chipmunk Basic, RapidQ and Xojo. Pascal is implemented through GNU Pascal, Free Pascal, and Virtual Pascal, as well as graphically via Lazarus, PascalABC.NET, or Delphi using FireMonkey (previously through Borland Kylix). A common feature of Unix-like systems, Linux includes traditional specific-purpose programming languages targeted at scripting, text processing and system configuration and management in general. Linux distributions support shell scripts, awk, sed and make. Many programs also have an embedded programming language to support configuring or programming themselves. For example, regular expressions are supported in programs like grep and locate, the traditional Unix message transfer agent Sendmail contains its own Turing complete scripting system, and the advanced text editor GNU Emacs is built around a general purpose Lisp interpreter. Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic languages. While not as common, Linux also supports C# and other CLI languages (via Mono), Vala, and Scheme. Guile Scheme acts as an extension language targeting the GNU system utilities, seeking to make the conventionally small, static, compiled C programs of Unix design rapidly and dynamically extensible via an elegant, functional high-level scripting system; many GNU programs can be compiled with optional Guile bindings to this end. A number of Java virtual machines and development kits run on Linux, including the original Sun Microsystems JVM (HotSpot), and IBM's J2SE RE, as well as many open-source projects like Kaffe and Jikes RVM; Kotlin, Scala, Groovy and other JVM languages are also available. GNOME and KDE are popular desktop environments and provide a framework for developing applications. These projects are based on the GTK and Qt widget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There are a number of Integrated development environments available including Anjuta, Code::Blocks, CodeLite, Eclipse, Geany, ActiveState Komodo, KDevelop, Lazarus, MonoDevelop, NetBeans, and Qt Creator, while the long-established editors Vim, nano and Emacs remain popular. 
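The paragraphs above describe the GNU toolchain used to build Linux programs and, earlier in the Design section, the GNU C library acting as a wrapper around kernel system calls. The sketch below, which assumes a GCC/glibc system and is not code from the article, reaches the same write(2) kernel entry point twice: once through the ordinary glibc wrapper and once through the generic syscall() interface. It can be built with, for example, gcc write_demo.c -o write_demo.

```c
#define _GNU_SOURCE            /* needed for syscall() */
#include <string.h>
#include <sys/syscall.h>       /* SYS_write */
#include <unistd.h>

int main(void) {
    const char *via_wrapper = "written via glibc's write() wrapper\n";
    const char *via_syscall = "written via syscall(SYS_write, ...)\n";

    /* glibc wrapper: sets up registers, traps into the kernel, translates errors to errno */
    write(STDOUT_FILENO, via_wrapper, strlen(via_wrapper));

    /* the same kernel entry point, invoked by system call number */
    syscall(SYS_write, STDOUT_FILENO, via_syscall, strlen(via_syscall));

    return 0;
}
```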
Hardware support The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range of computer architectures, including ARM-based Android smartphones and the IBM Z mainframes. Specialized distributions and kernel forks exist for less mainstream architectures; for example, the ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors, while the μClinux kernel fork may run on systems without a memory management unit. The kernel also runs on architectures that were only ever intended to use a proprietary manufacturer-created operating system, such as Macintosh computers (with PowerPC, Intel, and Apple silicon processors), PDAs, video game consoles, portable music players, and mobile phones. Linux has a reputation for supporting old hardware very well by maintaining standardized drivers for a long time. There are several industry associations and hardware conferences devoted to maintaining and improving support for diverse hardware under Linux, such as FreedomHEC. Over time, support for different hardware has improved in Linux, resulting in any off-the-shelf purchase having a "good chance" of being compatible. In 2014, a new initiative was launched to automatically collect a database of all tested hardware configurations. Uses Market share and uptake Many quantitative studies of free/open-source software focus on topics including market share and reliability, with numerous studies specifically examining Linux. The Linux market is growing, and the Linux operating system market size is expected to see a growth of 19.2% by 2027, reaching $15.64 billion, compared to $3.89 billion in 2019. Analysts project a Compound Annual Growth Rate (CAGR) of 13.7% between 2024 and 2032, culminating in a market size of US$34.90 billion by the latter year.[citation needed] Analysts and proponents attribute the relative success of Linux to its security, reliability, low cost, and freedom from vendor lock-in. As of 2024, estimates suggest Linux accounts for at least 80% of the public cloud workload, partly thanks to its widespread use in platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. ZDNet reports that 96.3% of the top one million web servers are running Linux. W3Techs state that Linux powers at least 39.2% of websites whose operating system is known, with other estimates saying 55%. Copyright, trademark, and naming The Linux kernel is licensed under the GNU General Public License (GPL), version 2. The GPL requires that anyone who distributes software based on source code under this license must make the originating source code (and any modifications) available to the recipient under the same terms. Other key components of a typical Linux distribution are also mainly licensed under the GPL, but they may use other licenses; many libraries use the GNU Lesser General Public License (LGPL), a more permissive variant of the GPL, and the X.Org implementation of the X Window System uses the MIT License. Torvalds states that the Linux kernel will not move from version 2 of the GPL to version 3. He specifically dislikes some provisions in the new license which prohibit the use of the software in digital rights management. It would also be impractical to obtain permission from all the copyright holders, who number in the thousands. A 2001 study of Red Hat Linux 7.1 found that this distribution contained 30 million lines of source code. 
Using the Constructive Cost Model, the study estimated that this distribution required about eight thousand person-years of development time. According to the study, if all this software had been developed by conventional proprietary means, it would have cost about US$1.86 billion (in 2024 dollars) to develop in the United States. Most of the source code (71%) was written in the C programming language, but many other languages were used, including C++, Lisp, assembly language, Perl, Python, Fortran, and various shell scripting languages. Slightly over half of all lines of code were licensed under the GPL. The Linux kernel itself was 2.4 million lines of code, or 8% of the total. In a later study, the same analysis was performed for Debian version 4.0 (etch, which was released in 2007). This distribution contained close to 283 million lines of source code, and the study estimated that it would have required about seventy-three thousand person-years and cost US$10.4 billion (in 2024 dollars) to develop by conventional means. In the United States, the name Linux is a trademark registered to Linus Torvalds. Initially, nobody registered it. However, on August 15, 1994, William R. Della Croce Jr. filed for the trademark Linux, and then demanded royalties from Linux distributors. In 1996, Torvalds and some affected organizations sued him to have the trademark assigned to Torvalds, and, in 1997, the case was settled. The licensing of the trademark has since been handled by the Linux Mark Institute (LMI). Torvalds has stated that he trademarked the name only to prevent someone else from using it. LMI originally charged a nominal sublicensing fee for use of the Linux name as part of trademarks, but later changed this in favor of offering a free, perpetual worldwide sublicense. The Free Software Foundation (FSF) prefers GNU/Linux as the name when referring to the operating system as a whole, because it considers Linux distributions to be variants of the GNU operating system initiated in 1983 by Richard Stallman, president of the FSF. The foundation explicitly takes no issue over the name Android for the Android OS, which is also an operating system based on the Linux kernel, as GNU is not a part of it. A minority of public figures and software projects other than Stallman and the FSF, notably distributions consisting of only free software, such as Debian (which had been sponsored by the FSF up to 1996), also use GNU/Linux when referring to the operating system as a whole. Most media and common usage, however, refers to this family of operating systems simply as Linux, as do many large Linux distributions (for example, SUSE Linux and Red Hat Enterprise Linux). As of May 2011, about 8% to 13% of the lines of code of the Linux distribution Ubuntu (version "Natty") are made up of GNU components (the range depending on whether GNOME is considered part of GNU); meanwhile, 6% is taken by the Linux kernel, rising to 9% when its direct dependencies are included. |
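The Red Hat Linux 7.1 effort figure above comes from the Constructive Cost Model (COCOMO). As a rough, order-of-magnitude sketch only, the basic organic-mode form of the model, with Boehm's textbook coefficients rather than the calibration used in the cited study, is:

```latex
% Basic COCOMO, organic mode (textbook coefficients; the cited study used its own settings)
E = 2.4\,(\mathrm{KLOC})^{1.05} \quad \text{person-months}

% Plugging in roughly 30{,}000 KLOC (30 million lines of code):
E \approx 2.4 \times (30\,000)^{1.05} \approx 1.2\times10^{5}\ \text{person-months}
  \approx 10^{4}\ \text{person-years}
```

which is the same order of magnitude as the roughly eight thousand person-years reported by the study.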
======================================== |
[SOURCE: https://www.fast.ai/posts/2021-11-23-data-for-good.html] | [TOKENS: 1255] |
Doing Data Science for Social Good, Responsibly Rachel Thomas November 23, 2021 The phrase “data science for social good” is a broad umbrella, ambiguously defined. As many others have pointed out, the term often fails to specify good for whom. Data science for social good can be used to refer to: nonprofits increasing their impact through more effective data use, hollow corporate PR efforts from big tech, well-intentioned projects that inadvertently result in surveillance and privacy invasion of marginalized groups, efforts steeped in colonialism, or many other types of projects. Note that none of the categories in the previous list are mutually exclusive, and one project may fit several of these descriptors. "Data for good" is an imprecise term that says little about who we serve, the tools used, or the goals. Being more precise can help us be more accountable & have greater positive impact. @sarahookr presents at @DataInstituteSF lunch seminar pic.twitter.com/efAMJxdQB8 Picture from a presentation given in 2018 by Sara Hooker, founder of non-profit Delta Analytics and an AI researcher at Google, on Why “data for good” lacks precision. I have been involved with data science for social good efforts for several years: chairing the Data for Good track at the USF Data Institute Conference in 2017; coordinating and mentoring graduate students in internships with nonprofits Human Rights Data Analysis Group (for a project on entity resolution to obtain more accurate casualty counts in the conflicts in Syria and Sri Lanka) and the American Civil Liberties Union (one student analyzed covid vaccine equity in California and another analyzed disparities in school disciplinary action against Black and disabled students) during my time as director of the Center for Applied Data Ethics at USF; and now as a co-lead of the Data Science for Social Good program at Queensland University of Technology (QUT). At QUT, grad students and recent graduates partnered with non-profits Cancer Council Queensland (well known for their Australian Cancer Atlas) and FareShare food rescue organisation, which operates Australia’s largest charity kitchens. While data for good projects can be incredibly useful, there are also pitfalls to be mindful of when approaching data for social good. Some Questions & Answers I recently spoke on a panel at the QUT Data Science for Social Good showcase event. I appreciated the thoughtful, nuanced questions from the moderators, Dr. Timothy Graham and Dr. Char-lee Moyle, who brought up some of the potential risks. I want to share their questions below, along with an expanded version of my answers. My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. Eg: we won't give everyone the healthcare they need, so let's use ML to decide who to deny. Question: What have you read about this? What examples have you seen? Many well-meaning projects inadvertently lead to increased surveillance, despite good intentions. Cell-phone data from millions of phone owners in over two dozen low- and middle-income countries has been anonymized and analyzed in the wake of humanitarian disasters. This data raises concerns about the lack of consent from the phone users and the risk of de-anonymization. Furthermore, it is often questionable whether the results are truly useful, as well as whether they could have been obtained through other, less invasive approaches. One such project analyzed the cell phone data of people in Sierra Leone during an Ebola outbreak. 
However, this approach didn’t address how Ebola spreads (only through direct contact with body fluids) or help with the most urgent issue (which was convincing symptomatic people to come to clinics to isolate). Academia and government have a big role to play. Often non-profits lack the in-house data science skill to take advantage of their data, and many data scientists who are searching for meaningful and impactful real-world problems to work on. We will also need the government to regulate topics such as data privacy to help protect those who may be impacted. It is important to recognize that privacy should not just be considered an individual right, but also a public good. We need ethical frameworks AND regulation. Both are crucially important. Many people want to do the right thing, and having standardized processes to guide them can help. I recommend the Markkula Center Tech Ethics Toolkit, which includes practical processes you can implement in your organization to try to identify ethical risks BEFORE they cause harm. At the same time, we need legal protections anywhere that data science impacts human rights and civil rights. Meaningful consequences are needed for those who cause harm to others. Also, policy is the appropriate tool to address negative externalities, such as when corporations offset their costs and harms to society while reaping the profits for themselves. Otherwise, there will always be a race to the bottom. The people who are already working for an organization are best positioned to understand that organization’s problems and challenges, and where data science can help. Upskilling in-house talent is underutilized. Don’t feel that you need to hire someone new with a fancy pedigree, if there are people at your organization who are interested and eager to learn. I would start by learning to code in Python. Have a project from your not-for-profit that you are working on as you go, and let that project motivate you to learn what you need as you need to (rather than feeling like you need to spend years studying before you can tackle the problems you care about). One of our core missions with fast.ai is to train people in different domains to use machine learning for themselves, as they best understand the problems in their domain and what is needed. There are many myths that you need a super-elite background to use techniques like deep learning, but it’s not magic. Anyone with a year of coding experience can learn to use state-of-the-art deep learning. Further Reading/Watching Here are some additional articles (and one video) that I recommend to learn more on this topic: - Narratives and Counternarratives on Data Sharing in Africa - Why “data for good” lacks precision - Can tracking people through phone-call data improve lives? - A New AI Lexicon: Social good - fast.ai Practical Data Ethics Week 6: Algorithmic Colonialism |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minkowski_metric] | [TOKENS: 19597] |
Contents Minkowski space In physics, Minkowski space (or Minkowski spacetime) (/mɪŋˈkɔːfski, -ˈkɒf-/) is the main mathematical description of spacetime in the absence of gravitation. It combines inertial space and time manifolds into a four-dimensional model. The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds". Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events.[nb 1] Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently from the three spatial dimensions. In 3-dimensional Euclidean space, the isometry group (maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity, where motion causes time dilation changing the scale applied to the frame in motion and shifts the phase of light. Minkowski space is a pseudo-Euclidean space equipped with an isotropic quadratic form called the spacetime interval or the Minkowski norm squared. An event in Minkowski space for which the spacetime interval is zero is on the null cone of the origin, called the light cone in Minkowski space. Using the polarization identity the quadratic form is converted to a symmetric bilinear form called the Minkowski inner product, though it is not a geometric inner product. Another misnomer is Minkowski metric,[further explanation needed] but Minkowski space is not a metric space. The group of transformations for Minkowski space that preserves the spacetime interval (as opposed to the spatial Euclidean distance) is the Lorentz group (as opposed to the Galilean group). History In his second relativity paper in 1905, Henri Poincaré showed how, by taking time to be an imaginary fourth spacetime coordinate ict, where c is the speed of light and i is the imaginary unit, Lorentz transformations can be visualized as ordinary rotations of the four-dimensional Euclidean sphere. The four-dimensional spacetime can be visualized as a four-dimensional space, with each point representing an event in spacetime. The Lorentz transformations can then be thought of as rotations in this four-dimensional space, where the rotation axis corresponds to the direction of relative motion between the two observers and the rotation angle is related to their relative velocity. To understand this concept, one should consider the coordinates of an event in spacetime represented as a four-vector (t, x, y, z). A Lorentz transformation is represented by a matrix that acts on the four-vector, changing its components. This matrix can be thought of as a rotation matrix in four-dimensional space, which rotates the four-vector around a particular axis. 
x 2 + y 2 + z 2 + ( i c t ) 2 = constant . {\displaystyle x^{2}+y^{2}+z^{2}+(ict)^{2}={\text{constant}}.} Rotations in planes spanned by two space unit vectors appear in coordinate space as well as in physical spacetime as Euclidean rotations and are interpreted in the ordinary sense. The "rotation" in a plane spanned by a space unit vector and a time unit vector, while formally still a rotation in coordinate space, is a Lorentz boost in physical spacetime with real inertial coordinates. The analogy with Euclidean rotations is only partial since the radius of the sphere is actually imaginary, which turns rotations into rotations in hyperbolic space (see hyperbolic rotation). This idea, which was mentioned only briefly by Poincaré, was elaborated by Minkowski in a paper in German published in 1908 called "The Fundamental Equations for Electromagnetic Processes in Moving Bodies". He reformulated Maxwell equations as a symmetrical set of equations in the four variables (x, y, z, ict) combined with redefined vector variables for electromagnetic quantities, and he was able to show directly and very simply their invariance under Lorentz transformation. He also made other important contributions and used matrix notation for the first time in this context. From his reformulation, he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional spacetime continuum. In a further development in his 1908 "Space and Time" lecture, Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables (x, y, z, t) of space and time in the coordinate form in a four-dimensional real vector space. Points in this space correspond to events in spacetime. In this space, there is a defined light-cone associated with each point, and events not on the light cone are classified by their relation to the apex as spacelike or timelike. It is principally this view of spacetime that is current nowadays, although the older view involving imaginary time has also influenced special relativity. In the English translation of Minkowski's paper, the Minkowski metric, as defined below, is referred to as the line element. The Minkowski inner product below appears unnamed when referring to orthogonality (which he calls normality) of certain vectors, and the Minkowski norm squared is referred to as "sum" (a word choice that might be attributable to language translation). Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g., proper time and length contraction) and to provide geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics. For these special topics, see the referenced articles, as the presentation below will be principally confined to the mathematical structure (Minkowski metric and from it derived quantities and the Poincaré group as symmetry group of spacetime) following from the invariance of the spacetime interval on the spacetime manifold as consequences of the postulates of special relativity, not to specific application or derivation of the invariance of the spacetime interval. This structure provides the background setting of all present relativistic theories, barring general relativity for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian. 
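To make the hyperbolic-rotation picture concrete, here is a small numerical sketch (not part of the article; units with c = 1): a Lorentz boost along x written with cosh and sinh of a rapidity, applied to an event, leaves the quantity c²t² − x² − y² − z² unchanged.

```python
import numpy as np

# A boost along x as a hyperbolic rotation by the rapidity phi (v/c = tanh(phi)).
phi = 0.8
boost = np.array([
    [np.cosh(phi), -np.sinh(phi), 0, 0],
    [-np.sinh(phi), np.cosh(phi), 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])

event = np.array([2.0, 1.0, 0.5, -0.3])      # (ct, x, y, z)
boosted = boost @ event

def interval(v):
    return v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2

print(interval(event), interval(boosted))    # equal up to rounding
```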
Minkowski, aware of the fundamental restatement of the theory which he had made, said

The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth, space by itself and time by itself are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.
— Hermann Minkowski, 1908, 1909

Though Minkowski took an important step for physics, Albert Einstein saw its limitation: At a time when Minkowski was giving the geometrical interpretation of special relativity by extending the Euclidean three-space to a quasi-Euclidean four-space that included time, Einstein was already aware that this is not valid, because it excludes the phenomenon of gravitation. He was still far from the study of curvilinear coordinates and Riemannian geometry, and the heavy mathematical apparatus entailed. For further historical information see references Galison (1979), Corry (1997) and Walter (1999).

Causal structure

Where v is velocity, x, y, and z are Cartesian coordinates in 3-dimensional space, c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c²t² − r². A vector is timelike if c²t² > r², spacelike if c²t² < r², and null or lightlike if c²t² = r². This can also be expressed in terms of the sign of η(v, v), also called the scalar product, which depends on the signature. The classification of any vector will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincaré transformation, because the origin may then be displaced) because of the invariance of the spacetime interval under Lorentz transformation. The set of all null vectors at an event[nb 2] of Minkowski space constitutes the light cone of that event. Given a timelike vector v, there is a worldline of constant velocity associated with it, represented by a straight line in a Minkowski diagram. Once a direction of time is chosen,[nb 3] timelike and null vectors can be further decomposed into various classes. Timelike vectors fall into two classes: future-directed timelike vectors, whose first component is positive, and past-directed timelike vectors, whose first component is negative. Null vectors fall into three classes: the zero vector, whose components in any basis are (0, 0, 0, 0); future-directed null vectors, whose first component is positive; and past-directed null vectors, whose first component is negative. Together with spacelike vectors, there are 6 classes in all. An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases, it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis. Vector fields are called timelike, spacelike, or null if the associated vectors are timelike, spacelike, or null at each point where the field is defined.

Properties of time-like vectors

Time-like vectors have special importance in the theory of relativity as they correspond to events that are accessible to the observer at (0, 0, 0, 0) with a speed less than that of light. Of most interest are time-like vectors that are similarly directed, i.e. all either in the forward or in the backward cones. Such vectors have several properties not shared by space-like vectors. These arise because both forward and backward cones are convex, whereas the space-like region is not convex. The scalar product of two time-like vectors u1 = (t1, x1, y1, z1) and u2 = (t2, x2, y2, z2) is
{\displaystyle \eta (u_{1},u_{2})=u_{1}\cdot u_{2}=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}.} Positivity of scalar product: An important property is that the scalar product of two similarly directed time-like vectors is always positive. This can be seen from the reversed Cauchy–Schwarz inequality below. It follows that if the scalar product of two vectors is zero, then one of these, at least, must be space-like. The scalar product of two space-like vectors can be positive or negative as can be seen by considering the product of two space-like vectors having orthogonal spatial components and times either of different or the same signs. Using the positivity property of time-like vectors, it is easy to verify that a linear sum with positive coefficients of similarly directed time-like vectors is also similarly directed time-like (the sum remains within the light cone because of convexity). The norm of a time-like vector u = (ct, x, y, z) is defined as ‖ u ‖ = η ( u , u ) = c 2 t 2 − x 2 − y 2 − z 2 {\displaystyle \left\|u\right\|={\sqrt {\eta (u,u)}}={\sqrt {c^{2}t^{2}-x^{2}-y^{2}-z^{2}}}} The reversed Cauchy inequality is another consequence of the convexity of either light cone. For two distinct similarly directed time-like vectors u1 and u2 this inequality is η ( u 1 , u 2 ) > ‖ u 1 ‖ ‖ u 2 ‖ {\displaystyle \eta (u_{1},u_{2})>\left\|u_{1}\right\|\left\|u_{2}\right\|} or algebraically, c 2 t 1 t 2 − x 1 x 2 − y 1 y 2 − z 1 z 2 > ( c 2 t 1 2 − x 1 2 − y 1 2 − z 1 2 ) ( c 2 t 2 2 − x 2 2 − y 2 2 − z 2 2 ) {\displaystyle c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}>{\sqrt {\left(c^{2}t_{1}^{2}-x_{1}^{2}-y_{1}^{2}-z_{1}^{2}\right)\left(c^{2}t_{2}^{2}-x_{2}^{2}-y_{2}^{2}-z_{2}^{2}\right)}}} From this, the positive property of the scalar product can be seen. For two similarly directed time-like vectors u and w, the inequality is ‖ u + w ‖ ≥ ‖ u ‖ + ‖ w ‖ , {\displaystyle \left\|u+w\right\|\geq \left\|u\right\|+\left\|w\right\|,} where the equality holds when the vectors are linearly dependent. The proof uses the algebraic definition with the reversed Cauchy inequality: ‖ u + w ‖ 2 = ‖ u ‖ 2 + 2 ( u , w ) + ‖ w ‖ 2 ≥ ‖ u ‖ 2 + 2 ‖ u ‖ ‖ w ‖ + ‖ w ‖ 2 = ( ‖ u ‖ + ‖ w ‖ ) 2 . {\displaystyle {\begin{aligned}\left\|u+w\right\|^{2}&=\left\|u\right\|^{2}+2\left(u,w\right)+\left\|w\right\|^{2}\\[5mu]&\geq \left\|u\right\|^{2}+2\left\|u\right\|\left\|w\right\|+\left\|w\right\|^{2}=\left(\left\|u\right\|+\left\|w\right\|\right)^{2}.\end{aligned}}} The result now follows by taking the square root on both sides. Mathematical structure It is assumed below that spacetime is endowed with a coordinate system corresponding to an inertial frame. This provides an origin, which is necessary for spacetime to be modeled as a vector space. This addition is not required, and more complex treatments analogous to an affine space can remove the extra structure. However, this is not the introductory convention and is not covered here. For an overview, Minkowski space is a 4-dimensional real vector space equipped with a non-degenerate, symmetric bilinear form on the tangent space at each point in spacetime, here simply called the Minkowski inner product, with metric signature either (+ − − −) or (− + + +). The tangent space at each event is a vector space of the same dimension as spacetime, 4. In practice, one need not be concerned with the tangent spaces. 
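A numerical companion to the statements above (a sketch, not a proof; it assumes c = 1 and the signature diag(+1, −1, −1, −1)): classifying vectors by the sign of η(v, v) and spot-checking the reversed Cauchy–Schwarz and reversed triangle inequalities for two future-directed timelike vectors.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def ip(u, v):
    """Minkowski scalar product eta(u, v)."""
    return u @ eta @ v

def classify(v):
    s = ip(v, v)
    return "timelike" if s > 0 else "spacelike" if s < 0 else "null"

def norm(u):
    """Norm of a timelike vector, sqrt(eta(u, u))."""
    return np.sqrt(ip(u, u))

u1 = np.array([3.0, 1.0, 0.5, -0.2])     # future-directed timelike
u2 = np.array([2.0, -0.3, 0.4, 1.0])     # future-directed timelike

print(classify(u1), classify(u2))                    # timelike timelike
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))      # null
print(ip(u1, u2) >= norm(u1) * norm(u2))             # reversed Cauchy-Schwarz: True
print(norm(u1 + u2) >= norm(u1) + norm(u2))          # reversed triangle inequality: True
```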
The vector space structure of Minkowski space allows for the canonical identification of vectors in tangent spaces at points (events) with vectors (points, events) in Minkowski space itself. See e.g. Lee (2003, Proposition 3.8.) or Lee (2012, Proposition 3.13.) These identifications are routinely done in mathematics. They can be expressed formally in Cartesian coordinates as ( x 0 , x 1 , x 2 , x 3 ) ↔ x 0 e 0 | p + x 1 e 1 | p + x 2 e 2 | p + x 3 e 3 | p ↔ x 0 e 0 | q + x 1 e 1 | q + x 2 e 2 | q + x 3 e 3 | q {\displaystyle {\begin{aligned}\left(x^{0},\,x^{1},\,x^{2},\,x^{3}\right)\ &\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{p}+\left.x^{1}\mathbf {e} _{1}\right|_{p}+\left.x^{2}\mathbf {e} _{2}\right|_{p}+\left.x^{3}\mathbf {e} _{3}\right|_{p}\\&\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{q}+\left.x^{1}\mathbf {e} _{1}\right|_{q}+\left.x^{2}\mathbf {e} _{2}\right|_{q}+\left.x^{3}\mathbf {e} _{3}\right|_{q}\end{aligned}}} with basis vectors in the tangent spaces defined by e μ | p = ∂ ∂ x μ | p or e 0 | p = ( 1 0 0 0 ) , etc . {\displaystyle \left.\mathbf {e} _{\mu }\right|_{p}=\left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p}{\text{ or }}\mathbf {e} _{0}|_{p}=\left({\begin{matrix}1\\0\\0\\0\end{matrix}}\right){\text{, etc}}.} Here, p and q are any two events, and the second basis vector identification is referred to as parallel transport. The first identification is the canonical identification of vectors in the tangent space at any point with vectors in the space itself. The appearance of basis vectors in tangent spaces as first-order differential operators is due to this identification. It is motivated by the observation that a geometrical tangent vector can be associated in a one-to-one manner with a directional derivative operator on the set of smooth functions. This is promoted to a definition of tangent vectors in manifolds not necessarily being embedded in Rn. This definition of tangent vectors is not the only possible one, as ordinary n-tuples can be used as well. A tangent vector at a point p may be defined, here specialized to Cartesian coordinates in Lorentz frames, as 4 × 1 column vectors v associated to each Lorentz frame related by Lorentz transformation Λ such that the vector v in a frame related to some frame by Λ transforms according to v → Λv. This is the same way in which the coordinates xμ transform. Explicitly, x ′ μ = Λ μ ν x ν , v ′ μ = Λ μ ν v ν . {\displaystyle {\begin{aligned}x'^{\mu }&={\Lambda ^{\mu }}_{\nu }x^{\nu },\\v'^{\mu }&={\Lambda ^{\mu }}_{\nu }v^{\nu }.\end{aligned}}} This definition is equivalent to the definition given above under a canonical isomorphism. For some purposes, it is desirable to identify tangent vectors at a point p with displacement vectors at p, which is, of course, admissible by essentially the same canonical identification. The identifications of vectors referred to above in the mathematical setting can correspondingly be found in a more physical and explicitly geometrical setting in Misner, Thorne & Wheeler (1973). They offer various degrees of sophistication (and rigor) depending on which part of the material one chooses to read. The metric signature refers to which sign the Minkowski inner product yields when given space (spacelike to be specific, defined further down) and time basis vectors (timelike) as arguments. Further discussion about this theoretically inconsequential but practically necessary choice for purposes of internal consistency and convenience is deferred to the hide box below. 
See also the page treating sign convention in Relativity. In general, but with several exceptions, mathematicians and general relativists prefer spacelike vectors to yield a positive sign, (− + + +), while particle physicists tend to prefer timelike vectors to yield a positive sign, (+ − − −). Authors covering several areas of physics, e.g. Steven Weinberg and Landau and Lifshitz ((− + + +) and (+ − − −), respectively) stick to one choice regardless of topic. Arguments for the former convention include "continuity" from the Euclidean case corresponding to the non-relativistic limit c → ∞. Arguments for the latter include that minus signs, otherwise ubiquitous in particle physics, go away. Yet other authors, especially of introductory texts, e.g. Kleppner & Kolenkow (1978), do not choose a signature at all, but instead, opt to coordinatize spacetime such that the time coordinate (but not time itself!) is imaginary. This removes the need for the explicit introduction of a metric tensor (which may seem like an extra burden in an introductory course), and one need not be concerned with covariant vectors and contravariant vectors (or raising and lowering indices) to be described below. The inner product is instead effected by a straightforward extension of the dot product from R 3 {\displaystyle \mathbb {R} ^{3}} over to C × R 3 . {\displaystyle \mathbb {C} \times \mathbb {R} ^{3}.} This works in the flat spacetime of special relativity, but not in the curved spacetime of general relativity, see Misner, Thorne & Wheeler (1973, Box 2.1, "Farewell to i c t ") (who, by the way use (− + + +) ). MTW also argues that it hides the true indefinite nature of the metric and the true nature of Lorentz boosts, which are not rotations. It also needlessly complicates the use of tools of differential geometry that are otherwise immediately available and useful for geometrical description and calculation – even in the flat spacetime of special relativity, e.g. of the electromagnetic field. Mathematically associated with the bilinear form is a tensor of type (0,2) at each point in spacetime, called the Minkowski metric.[nb 4] The Minkowski metric, the bilinear form, and the Minkowski inner product are all the same object; it is a bilinear function that accepts two (contravariant) vectors and returns a real number. In coordinates, this is the 4×4 matrix representing the bilinear form. For comparison, in general relativity, a Lorentzian manifold L is likewise equipped with a metric tensor g, which is a nondegenerate symmetric bilinear form on the tangent space TpL at each point p of L. In coordinates, it may be represented by a 4×4 matrix depending on spacetime position. Minkowski space is thus a comparatively simple special case of a Lorentzian manifold. Its metric tensor is in coordinates with the same symmetric matrix at every point of M, and its arguments can, per above, be taken as vectors in spacetime itself. Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (1, 3) or (3, 1). Elements of Minkowski space are called events. Minkowski space is often denoted R1,3 or R3,1 to emphasize the chosen signature, or just M. It is an example of a pseudo-Riemannian manifold. 
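The imaginary-time bookkeeping described above can be shown in a few lines (a sketch, not from the article; c = 1): with the fourth coordinate taken as ict, an ordinary Euclidean sum of squares reproduces minus the (+ − − −) interval, with no explicit metric tensor.

```python
import numpy as np

ct, x, y, z = 2.0, 1.0, 0.5, -0.3

four_vector = np.array([x, y, z, 1j * ct])       # (x, y, z, ict)
euclidean_sum = np.sum(four_vector**2)           # x^2 + y^2 + z^2 - (ct)^2

eta = np.diag([1.0, -1.0, -1.0, -1.0])
v = np.array([ct, x, y, z])
interval = v @ eta @ v                           # c^2 t^2 - x^2 - y^2 - z^2

print(euclidean_sum.real, -interval)             # equal: both give minus the interval
```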
Then mathematically, the metric is a bilinear form on an abstract four-dimensional real vector space V, that is, η : V × V → R {\displaystyle \eta :V\times V\rightarrow \mathbf {R} } where η has signature (+, -, -, -), and signature is a coordinate-invariant property of η. The space of bilinear maps forms a vector space which can be identified with M ∗ ⊗ M ∗ {\displaystyle M^{*}\otimes M^{*}} , and η may be equivalently viewed as an element of this space. By making a choice of orthonormal basis { e μ } {\displaystyle \{e_{\mu }\}} , M := ( V , η ) {\displaystyle M:=(V,\eta )} can be identified with the space R 1 , 3 := ( R 4 , η μ ν ) {\displaystyle \mathbf {R} ^{1,3}:=(\mathbf {R} ^{4},\eta _{\mu \nu })} . The notation is meant to emphasize the fact that M and R 1 , 3 {\displaystyle \mathbf {R} ^{1,3}} are not just vector spaces but have added structure. η μ ν = diag ( + 1 , − 1 , − 1 , − 1 ) {\displaystyle \eta _{\mu \nu }={\text{diag}}(+1,-1,-1,-1)} . An interesting example of non-inertial coordinates for (part of) Minkowski spacetime is the Born coordinates. Another useful set of coordinates is the light-cone coordinates. The Minkowski inner product is not an inner product, since it has non-zero null vectors. Since it is not a definite bilinear form it is called indefinite. The Minkowski metric η is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally, a constant pseudo-Riemannian metric in Cartesian coordinates. As such, it is a nondegenerate symmetric bilinear form, a type (0, 2) tensor. It accepts two arguments up, vp, vectors in TpM, p ∈ M, the tangent space at p in M. Due to the above-mentioned canonical identification of TpM with M itself, it accepts arguments u, v with both u and v in M. As a notational convention, vectors v in M, called 4-vectors, are denoted in italics, and not, as is common in the Euclidean setting, with boldface v. The latter is generally reserved for the 3-vector part (to be introduced below) of a 4-vector. The definition u ⋅ v = η ( u , v ) {\displaystyle u\cdot v=\eta (u,\,v)} yields an inner product-like structure on M, previously and also henceforth, called the Minkowski inner product, similar to the Euclidean inner product, but it describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same, u ⋅ u = η ( u , u ) ≡ ‖ u ‖ 2 ≡ u 2 , {\displaystyle u\cdot u=\eta (u,u)\equiv \|u\|^{2}\equiv u^{2},} the resulting quantity will be called the Minkowski norm squared. The Minkowski inner product satisfies the following properties. The first two conditions imply bilinearity. The most important feature of the inner product and norm squared is that these are quantities unaffected by Lorentz transformations. In fact, it can be taken as the defining property of a Lorentz transformation in that it preserves the inner product (i.e. the value of the corresponding bilinear form on two vectors). This approach is taken more generally for all classical groups definable this way in classical group. There, the matrix Φ is identical in the case O(3, 1) (the Lorentz group) to the matrix η to be displayed below. Minkowski space is constructed so that the speed of light will be the same constant regardless of the reference frame in which it is measured. This property results from the relation of the time axis to a space axis. Two events u and v are orthogonal when the bilinear form is zero for them: η(v, w) = 0. 
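A minimal numerical illustration of this indefiniteness (assuming η = diag(+1, −1, −1, −1)): a nonzero vector on the light cone has Minkowski norm squared exactly zero, while unit timelike and spacelike vectors give +1 and −1.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

null_vec = np.array([1.0, 1.0, 0.0, 0.0])    # nonzero, yet "length" zero
e0 = np.array([1.0, 0.0, 0.0, 0.0])          # unit timelike
e1 = np.array([0.0, 1.0, 0.0, 0.0])          # unit spacelike

for v in (null_vec, e0, e1):
    print(v @ eta @ v)                       # 0.0, 1.0, -1.0
```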
When both u and v are both space-like, then they are perpendicular, but if one is time-like and the other space-like, then the relation is hyperbolic orthogonality. The relation is preserved in a change of reference frames and consequently the computation of light speed yields a constant result. The change of reference frame is called a Lorentz boost and in mathematics it is a hyperbolic rotation. Each reference frame is associated with a hyperbolic angle, which is zero for the rest frame in Minkowski space. Such a hyperbolic angle has been labelled rapidity since it is associated with the speed of the frame. From the second postulate of special relativity, together with homogeneity of spacetime and isotropy of space, it follows that the spacetime interval between two arbitrary events called 1 and 2 is: c 2 ( t 1 − t 2 ) 2 − ( x 1 − x 2 ) 2 − ( y 1 − y 2 ) 2 − ( z 1 − z 2 ) 2 . {\displaystyle c^{2}\left(t_{1}-t_{2}\right)^{2}-\left(x_{1}-x_{2}\right)^{2}-\left(y_{1}-y_{2}\right)^{2}-\left(z_{1}-z_{2}\right)^{2}.} This quantity is not consistently named in the literature. The interval is sometimes referred to as the square root of the interval as defined here. The invariance of the interval under coordinate transformations between inertial frames follows from the invariance of c 2 t 2 − x 2 − y 2 − z 2 {\displaystyle c^{2}t^{2}-x^{2}-y^{2}-z^{2}} provided the transformations are linear. This quadratic form can be used to define a bilinear form u ⋅ v = c 2 t 1 t 2 − x 1 x 2 − y 1 y 2 − z 1 z 2 {\displaystyle u\cdot v=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}} via the polarization identity. This bilinear form can in turn be written as u ⋅ v = u T [ η ] v , {\displaystyle u\cdot v=u^{\textsf {T}}\,[\eta ]\,v,} where [η] is a 4 × 4 {\displaystyle 4\times 4} matrix associated with η. While possibly confusing, it is common practice to denote [η] with just η. The matrix is read off from the explicit bilinear form as η = ( 1 0 0 0 0 − 1 0 0 0 0 − 1 0 0 0 0 − 1 ) , {\displaystyle \eta =\left({\begin{array}{r}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{array}}\right)\!,} and the bilinear form u ⋅ v = η ( u , v ) , {\displaystyle u\cdot v=\eta (u,v),} with which this section started by assuming its existence, is now identified. For definiteness and shorter presentation, the signature (− + + +) is adopted below. This choice (or the other possible choice) has no (known) physical implications. The symmetry group preserving the bilinear form with one choice of signature is isomorphic (under the map given here) with the symmetry group preserving the other choice of signature. This means that both choices are in accord with the two postulates of relativity. Switching between the two conventions is straightforward. If the metric tensor η has been used in a derivation, go back to the earliest point where it was used, substitute η for −η, and retrace forward to the desired formula with the desired metric signature. A standard or orthonormal basis for Minkowski space is a set of four mutually orthogonal vectors {e0, e1, e2, e3} such that η ( e 0 , e 0 ) = − η ( e 1 , e 1 ) = − η ( e 2 , e 2 ) = − η ( e 3 , e 3 ) = 1 {\displaystyle \eta (e_{0},e_{0})=-\eta (e_{1},e_{1})=-\eta (e_{2},e_{2})=-\eta (e_{3},e_{3})=1} and for which η ( e μ , e ν ) = 0 {\displaystyle \eta (e_{\mu },e_{\nu })=0} when μ ≠ ν . {\textstyle \mu \neq \nu \,.} These conditions can be written compactly in the form η ( e μ , e ν ) = η μ ν . 
{\displaystyle \eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.} Relative to a standard basis, the components of a vector v are written (v0, v1, v2, v3) where the Einstein notation is used to write v = vμ eμ. The component v0 is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v1, v2, v3). In terms of components, the Minkowski inner product between two vectors v and w is given by η ( v , w ) = η μ ν v μ w ν = v 0 w 0 + v 1 w 1 + v 2 w 2 + v 3 w 3 = v μ w μ = v μ w μ , {\displaystyle \eta (v,w)=\eta _{\mu \nu }v^{\mu }w^{\nu }=v^{0}w_{0}+v^{1}w_{1}+v^{2}w_{2}+v^{3}w_{3}=v^{\mu }w_{\mu }=v_{\mu }w^{\mu },} and η ( v , v ) = η μ ν v μ v ν = v 0 v 0 + v 1 v 1 + v 2 v 2 + v 3 v 3 = v μ v μ . {\displaystyle \eta (v,v)=\eta _{\mu \nu }v^{\mu }v^{\nu }=v^{0}v_{0}+v^{1}v_{1}+v^{2}v_{2}+v^{3}v_{3}=v^{\mu }v_{\mu }.} Here lowering of an index with the metric was used. There are many possible choices of standard basis obeying the condition η ( e μ , e ν ) = η μ ν . {\displaystyle \eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.} Any two such bases are related in some sense by a Lorentz transformation, either by a change-of-basis matrix Λ ν μ {\displaystyle \Lambda _{\nu }^{\mu }} , a real 4 × 4 matrix satisfying Λ ρ μ η μ ν Λ σ ν = η ρ σ . {\displaystyle \Lambda _{\rho }^{\mu }\eta _{\mu \nu }\Lambda _{\sigma }^{\nu }=\eta _{\rho \sigma }.} or Λ, a linear map on the abstract vector space satisfying, for any pair of vectors u, v, η ( Λ u , Λ v ) = η ( u , v ) . {\displaystyle \eta (\Lambda u,\Lambda v)=\eta (u,v).} Then if two different bases exist, {e0, e1, e2, e3} and {e′0, e′1, e′2, e′3}, e μ ′ = e ν Λ μ ν {\displaystyle e_{\mu }'=e_{\nu }\Lambda _{\mu }^{\nu }} can be represented as e μ ′ = e ν Λ μ ν {\displaystyle e_{\mu }'=e_{\nu }\Lambda _{\mu }^{\nu }} or e μ ′ = Λ e μ {\displaystyle e_{\mu }'=\Lambda e_{\mu }} . While it might be tempting to think of Λ ν μ {\displaystyle \Lambda _{\nu }^{\mu }} and Λ as the same thing, mathematically, they are elements of different spaces, and act on the space of standard bases from different sides. Technically, a non-degenerate bilinear form provides a map between a vector space and its dual; in this context, the map is between the tangent spaces of M and the cotangent spaces of M. At a point in M, the tangent and cotangent spaces are dual vector spaces (so the dimension of the cotangent space at an event is also 4). Just as an authentic inner product on a vector space with one argument fixed, by Riesz representation theorem, may be expressed as the action of a linear functional on the vector space, the same holds for the Minkowski inner product of Minkowski space. Thus if vμ are the components of a vector in tangent space, then ημν vμ = vν are the components of a vector in the cotangent space (a linear functional). Due to the identification of vectors in tangent spaces with vectors in M itself, this is mostly ignored, and vectors with lower indices are referred to as covariant vectors. In this latter interpretation, the covariant vectors are (almost always implicitly) identified with vectors (linear functionals) in the dual of Minkowski space. The ones with upper indices are contravariant vectors. In the same fashion, the inverse of the map from tangent to cotangent spaces, explicitly given by the inverse of η in matrix representation, can be used to define raising of an index. The components of this inverse are denoted ημν. 
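The index lowering and the change-of-basis condition just stated can be spot-checked numerically; the sketch below (not from the article) uses η = diag(+1, −1, −1, −1) and a boost along x.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

v_up = np.array([2.0, 1.0, -0.5, 0.3])               # contravariant components v^mu
v_down = np.einsum("mn,n->m", eta, v_up)              # covariant components v_mu
print(v_down)                                          # time component unchanged, spatial flipped
print(np.einsum("m,m->", v_up, v_down))                # the Minkowski norm squared eta(v, v)

phi = 0.4
boost = np.array([
    [np.cosh(phi), -np.sinh(phi), 0, 0],
    [-np.sinh(phi), np.cosh(phi), 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])
# Lambda^mu_rho eta_{mu nu} Lambda^nu_sigma = eta_{rho sigma}
print(np.allclose(boost.T @ eta @ boost, eta))         # True: the boost preserves eta
```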
It happens that ημν = ημν. These maps between a vector space and its dual can be denoted η♭ (eta-flat) and η♯ (eta-sharp) by the musical analogy. Contravariant and covariant vectors are geometrically very different objects. The first can and should be thought of as arrows. A linear function can be characterized by two objects: its kernel, which is a hyperplane passing through the origin, and its norm. Geometrically thus, covariant vectors should be viewed as a set of hyperplanes, with spacing depending on the norm (bigger = smaller spacing), with one of them (the kernel) passing through the origin. The mathematical term for a covariant vector is 1-covector or 1-form (though the latter is usually reserved for covector fields). One quantum mechanical analogy explored in the literature is that of a de Broglie wave (scaled by a factor of Planck's reduced constant) associated with a momentum four-vector to illustrate how one could imagine a covariant version of a contravariant vector. The inner product of two contravariant vectors could equally well be thought of as the action of the covariant version of one of them on the contravariant version of the other. The inner product is then how many times the arrow pierces the planes. The mathematical reference, Lee (2003), offers the same geometrical view of these objects (but mentions no piercing). The electromagnetic field tensor is a differential 2-form, which geometrical description can as well be found in MTW. One may, of course, ignore geometrical views altogether (as is the style in e.g. Weinberg (2002) and Landau & Lifshitz 2002) and proceed algebraically in a purely formal fashion. The time-proven robustness of the formalism itself, sometimes referred to as index gymnastics, ensures that moving vectors around and changing from contravariant to covariant vectors and vice versa (as well as higher order tensors) is mathematically sound. Incorrect expressions tend to reveal themselves quickly. Given a bilinear form η : M × M → R , {\displaystyle \ \eta :M\times M\rightarrow \mathbb {R} \ ,} the lowered version of a vector can be thought of as the partial evaluation of η , {\displaystyle \ \eta \ ,} that is, there is an associated partial evaluation map η ( ⋅ , − ) : M → M ∗ , v ↦ η ( v , ⋅ ) . {\displaystyle \eta (\cdot ,-):M\rightarrow M^{*}\ ~,\quad v\mapsto \eta (v,\cdot )~.} The lowered vector η ( v , ⋅ ) ∈ M ∗ {\displaystyle \ \eta (v,\cdot )\in M^{*}\ } is then the dual map u ↦ η ( v , u ) . {\displaystyle \ u\mapsto \eta (v,u)~.} Note it does not matter which argument is partially evaluated due to the symmetry of η . {\displaystyle \ \eta ~.} Non-degeneracy is then equivalent to injectivity of the partial evaluation map, or equivalently non-degeneracy indicates that the kernel of the map is trivial. In finite dimension, as is the case here, and noting that the dimension of a finite-dimensional space is equal to the dimension of the dual, this is enough to conclude the partial evaluation map is a linear isomorphism from M {\displaystyle \ M\ } to M ∗ . 
{\displaystyle \ M^{*}~.} This then allows the definition of the inverse partial evaluation map, η − 1 : M ∗ → M , {\displaystyle \eta ^{-1}:M^{*}\rightarrow M\ ,} which allows the inverse metric to be defined as η − 1 : M ∗ × M ∗ → R , η − 1 ( α , β ) = η ( η − 1 ( α ) , η − 1 ( β ) ) {\displaystyle \eta ^{-1}:M^{*}\times M^{*}\rightarrow \mathbb {R} \ ~,\quad \eta ^{-1}\!(\alpha ,\beta )\ =\ \eta {\bigl (}\ \eta ^{-1}\!(\alpha ),\ \eta ^{-1}\!(\beta )\ {\bigr )}\ } where the two different usages of η − 1 {\displaystyle \;\eta ^{-1}\ } can be told apart by the argument each is evaluated on. This can then be used to raise indices. If a coordinate basis is used, the metric η−1 is indeed the matrix inverse to η . The present purpose is to show semi-rigorously how formally one may apply the Minkowski metric to two vectors and obtain a real number, i.e. to display the role of the differentials and how they disappear in a calculation. The setting is that of smooth manifold theory, and concepts such as convector fields and exterior derivatives are introduced. A full-blown version of the Minkowski metric in coordinates as a tensor field on spacetime has the appearance η μ ν d x μ ⊗ d x ν = η μ ν d x μ ⊙ d x ν = η μ ν d x μ d x ν . {\displaystyle \eta _{\mu \nu }\operatorname {d} x^{\mu }\otimes \operatorname {d} x^{\nu }=\eta _{\mu \nu }\operatorname {d} x^{\mu }\odot \operatorname {d} x^{\nu }=\eta _{\mu \nu }\operatorname {d} x^{\mu }\operatorname {d} x^{\nu }~.} Explanation: The coordinate differentials are 1-form fields. They are defined as the exterior derivative of the coordinate functions xμ. These quantities evaluated at a point p provide a basis for the cotangent space at p. The tensor product (denoted by the symbol ⊗) yields a tensor field of type (0, 2), i.e. the type that expects two contravariant vectors as arguments. On the right-hand side, the symmetric product (denoted by the symbol ⊙ or by juxtaposition) has been taken. The equality holds since, by definition, the Minkowski metric is symmetric. The notation on the far right is also sometimes used for the related, but different, line element. It is not a tensor. For elaboration on the differences and similarities, see Misner, Thorne & Wheeler (1973, Box 3.2 and section 13.2.) Tangent vectors are, in this formalism, given in terms of a basis of differential operators of the first order, ∂ ∂ x μ | p , {\displaystyle \left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p}\ ,} where p is an event. This operator applied to a function f gives the directional derivative of f at p in the direction of increasing xμ with xν, ν ≠ μ fixed. They provide a basis for the tangent space at p. The exterior derivative df of a function f is a covector field, i.e. an assignment of a cotangent vector to each point p, by definition such that d f ( X ) = X f , {\displaystyle \operatorname {d} f(X)=X\ f,} for each vector field X. A vector field is an assignment of a tangent vector to each point p. In coordinates X can be expanded at each point p in the basis given by the ∂/∂xν | p . Applying this with f = xμ, the coordinate function itself, and X = ∂/ ∂xν , called a coordinate vector field, one obtains d x μ ( ∂ ∂ x ν ) = ∂ x μ ∂ x ν = δ ν μ . 
{\displaystyle \operatorname {d} x^{\mu }\left({\frac {\partial }{\partial x^{\nu }}}\right)={\frac {\partial x^{\mu }}{\partial x^{\nu }}}=\delta _{\nu }^{\mu }~.} Since this relation holds at each point p, the dxμ|p provide a basis for the cotangent space at each p and the bases d xμ|p and ∂/∂xν |p are dual to each other, d x μ | p ( ∂ ∂ x ν | p ) = δ ν μ . {\displaystyle {\Bigl .}\operatorname {d} x^{\mu }{\Bigr |}_{p}\left(\left.{\frac {\partial }{\partial x^{\nu }}}\right|_{p}\right)=\delta _{\nu }^{\mu }~.} at each p. Furthermore, one has α ⊗ β ( a , b ) = α ( a ) β ( b ) {\displaystyle \alpha \ \otimes \ \beta (a,b)\ =\ \alpha (a)\ \beta (b)\ } for general one-forms on a tangent space α, β and general tangent vectors a, b. (This can be taken as a definition, but may also be proved in a more general setting.) Thus when the metric tensor is fed two vectors fields a, b, both expanded in terms of the basis coordinate vector fields, the result is η μ ν d x μ ⊗ d x ν ( a , b ) = η μ ν a μ b ν , {\displaystyle \eta _{\mu \nu }\ \operatorname {d} x^{\mu }\otimes \operatorname {d} x^{\nu }(a,b)\ =\ \eta _{\mu \nu }\ a^{\mu }\ b^{\nu }\ ,} where aμ, bν are the component functions of the vector fields. The above equation holds at each point p, and the relation may as well be interpreted as the Minkowski metric at p applied to two tangent vectors at p. As mentioned, in a vector space, such as modeling the spacetime of special relativity, tangent vectors can be canonically identified with vectors in the space itself, and vice versa. This means that the tangent spaces at each point are canonically identified with each other and with the vector space itself. This explains how the right-hand side of the above equation can be employed directly, without regard to the spacetime point the metric is to be evaluated and from where (which tangent space) the vectors come from. This situation changes in general relativity. There one has g μ ν ( p ) d x μ | p d x ν | p ( a , b ) = g μ ν ( p ) a μ b ν , {\displaystyle g_{\mu \nu }\!(p)\ {\Bigl .}\operatorname {d} x^{\mu }{\Bigr |}_{p}\ \left.\operatorname {d} x^{\nu }\right|_{p}(a,b)\ =\ g_{\mu \nu }\!(p)\ a^{\mu }\ b^{\nu }\ ,} where now η → g(p), i.e., g is still a metric tensor but now depending on spacetime and is a solution of Einstein's field equations. Moreover, a, b must be tangent vectors at spacetime point p and can no longer be moved around freely. Let x, y ∈ M. Here, Suppose x ∈ M is timelike. Then the simultaneous hyperplane for x is {y : η(x, y) = 0}. Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space. Generalizations A Lorentzian manifold is a generalization of Minkowski space in two ways. The total number of spacetime dimensions is not restricted to be 4 (2 or more) and a Lorentzian manifold need not be flat, i.e. it allows for curvature. Complexified Minkowski space is defined as Mc = M ⊕ iM. Its real part is the Minkowski space of four-vectors, such as the four-velocity and the four-momentum, which are independent of the choice of orientation of the space. The imaginary part, on the other hand, may consist of four pseudovectors, such as angular velocity and magnetic moment, which change their direction with a change of orientation. A pseudoscalar i is introduced, which also changes sign with a change of orientation. Thus, elements of Mc are independent of the choice of the orientation. The inner product-like structure on Mc is defined as u ⋅ v = η(u, v) for any u,v ∈ Mc. 
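The relativity of simultaneity mentioned above admits a one-screen numerical sketch (c = 1, η = diag(+1, −1, −1, −1)): an event that is Minkowski-orthogonal to the time axis of one observer, and hence simultaneous with the origin for that observer, is not orthogonal to the time axis of a boosted observer.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

x_rest = np.array([1.0, 0.0, 0.0, 0.0])                        # time axis of an observer at rest
phi = 0.5
x_boosted = np.array([np.cosh(phi), np.sinh(phi), 0.0, 0.0])   # time axis of a moving observer

event = np.array([0.0, 1.0, 0.0, 0.0])                          # some event y

print(x_rest @ eta @ event)      # 0.0: in the rest observer's simultaneous hyperplane
print(x_boosted @ eta @ event)   # nonzero: not simultaneous for the boosted observer
```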
A relativistic pure spin of an electron or any half spin particle is described by ρ ∈ Mc as ρ = u + is, where u is the four-velocity of the particle, satisfying u2 = 1 and s is the 4D spin vector, which is also the Pauli–Lubanski pseudovector satisfying s2 = −1 and u ⋅ s = 0. Minkowski space refers to a mathematical formulation in four dimensions. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. If n ≥ 2, n-dimensional Minkowski space is a vector space of real dimension n on which there is a constant Minkowski metric of signature (n − 1, 1) or (1, n − 1). These generalizations are used in theories where spacetime is assumed to have more or less than 4 dimensions. String theory and M-theory are two examples where n > 4. In string theory there appear conformal field theories with 1 + 1 spacetime dimensions. de Sitter space can be formulated as a submanifold of generalized Minkowski space as can the model spaces of hyperbolic geometry (see below). As a flat spacetime, the three spatial components of Minkowski spacetime always obey the Pythagorean theorem. Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances in systems without significant gravitation. However, in order to take gravity into account, physicists use the theory of general relativity, which is formulated in the mathematics of differential geometry of differential manifolds. When this geometry is used as a model of spacetime, it is known as curved spacetime. Even in curved spacetime, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities).[nb 5] More abstractly, it can be said that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity. Geometry The meaning of the term geometry for the Minkowski space depends heavily on the context. Minkowski space is not endowed with Euclidean geometry, and not with any of the generalized Riemannian geometries with intrinsic curvature, those exposed by the model spaces in hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). The reason is the indefiniteness of the Minkowski metric. Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric. However, Minkowski space contains submanifolds endowed with a Riemannian metric yielding hyperbolic geometry. Model spaces of hyperbolic geometry of low dimension, say 2 or 3, cannot be isometrically embedded in Euclidean space with one more dimension, i.e. R 3 {\displaystyle \mathbb {R} ^{3}} or R 4 {\displaystyle \mathbb {R} ^{4}} respectively, with the Euclidean metric g ¯ {\displaystyle {\overline {g}}} , preventing easy visualization.[nb 6] By comparison, model spaces with positive curvature are just spheres in Euclidean space of one higher dimension. Hyperbolic spaces can be isometrically embedded in spaces of one more dimension when the embedding space is endowed with the Minkowski metric η {\displaystyle \eta } . 
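As a numerical preview of the embedding described next (a sketch under the assumptions c = 1, n = 2, η restricted to diag(+1, −1, −1)): points of the upper hyperboloid sheet c²t² − |x|² = R², ct > 0, parametrized by a hyperbolic angle, satisfy the constraint exactly, and tangent vectors to the sheet are spacelike, which is why the induced metric is definite.

```python
import numpy as np

R, s, theta = 2.0, 0.7, 1.1

# A point of the upper sheet and a tangent vector along the curve s -> point(s).
point = R * np.array([np.cosh(s), np.sinh(s) * np.cos(theta), np.sinh(s) * np.sin(theta)])
tangent = R * np.array([np.sinh(s), np.cosh(s) * np.cos(theta), np.cosh(s) * np.sin(theta)])

eta3 = np.diag([1.0, -1.0, -1.0])        # eta restricted to the coordinates (ct, x1, x2)
print(point @ eta3 @ point)              # R^2 = 4.0: the point lies on the hyperboloid
print(tangent @ eta3 @ tangent)          # -R^2 < 0: the tangent vector is spacelike
```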
Define H R 1 ( n ) ⊂ M n + 1 {\displaystyle \mathbf {H} _{R}^{1(n)}\subset \mathbf {M} ^{n+1}} to be the upper sheet ( c t > 0 {\displaystyle ct>0} ) of the hyperboloid H R 1 ( n ) = { ( c t , x 1 , … , x n ) ∈ M n : c 2 t 2 − ( x 1 ) 2 − ⋯ − ( x n ) 2 = R 2 , c t > 0 } {\displaystyle \mathbf {H} _{R}^{1(n)}=\left\{\left(ct,x^{1},\ldots ,x^{n}\right)\in \mathbf {M} ^{n}:c^{2}t^{2}-\left(x^{1}\right)^{2}-\cdots -\left(x^{n}\right)^{2}=R^{2},ct>0\right\}} in generalized Minkowski space M n + 1 {\displaystyle \mathbf {M} ^{n+1}} of spacetime dimension n + 1. {\displaystyle n+1.} This is one of the surfaces of transitivity of the generalized Lorentz group. The induced metric on this submanifold, h R 1 ( n ) = ι ∗ η , {\displaystyle h_{R}^{1(n)}=\iota ^{*}\eta ,} the pullback of the Minkowski metric η {\displaystyle \eta } under inclusion, is a Riemannian metric. With this metric H R 1 ( n ) {\displaystyle \mathbf {H} _{R}^{1(n)}} is a Riemannian manifold. It is one of the model spaces of Riemannian geometry, the hyperboloid model of hyperbolic space. It is a space of constant negative curvature − 1 / R 2 {\displaystyle -1/R^{2}} . The 1 in the upper index refers to an enumeration of the different model spaces of hyperbolic geometry, and the n for its dimension. A 2 ( 2 ) {\displaystyle 2(2)} corresponds to the Poincaré disk model, while 3 ( n ) {\displaystyle 3(n)} corresponds to the Poincaré half-space model of dimension n . {\displaystyle n.} In the definition above ι : H R 1 ( n ) → M n + 1 {\displaystyle \iota :\mathbf {H} _{R}^{1(n)}\rightarrow \mathbf {M} ^{n+1}} is the inclusion map and the superscript star denotes the pullback. The present purpose is to describe this and similar operations as a preparation for the actual demonstration that H R 1 ( n ) {\displaystyle \mathbf {H} _{R}^{1(n)}} actually is a hyperbolic space. Behavior of tensors under inclusion: For inclusion maps from a submanifold S into M and a covariant tensor α of order k on M it holds that ι ∗ α ( X 1 , X 2 , … , X k ) = α ( ι ∗ X 1 , ι ∗ X 2 , … , ι ∗ X k ) = α ( X 1 , X 2 , … , X k ) , {\displaystyle \iota ^{*}\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(\iota _{*}X_{1},\,\iota _{*}X_{2},\,\ldots ,\,\iota _{*}X_{k}\right)=\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right),} where X1, X1, ..., Xk are vector fields on S. The subscript star denotes the pushforward (to be introduced later), and it is in this special case simply the identity map (as is the inclusion map). The latter equality holds because a tangent space to a submanifold at a point is in a canonical way a subspace of the tangent space of the manifold itself at the point in question. One may simply write ι ∗ α = α | S , {\displaystyle \iota ^{*}\alpha =\alpha |_{S},} meaning (with slight abuse of notation) the restriction of α to accept as input vectors tangent to some s ∈ S only. Pullback of tensors under general maps: The pullback of a covariant k-tensor α (one taking only contravariant vectors as arguments) under a map F: M → N is a linear map F ∗ : T F ( p ) k N → T p k M , {\displaystyle F^{*}\colon T_{F(p)}^{k}N\rightarrow T_{p}^{k}M,} where for any vector space V, T k V = V ∗ ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ k times . 
{\displaystyle T^{k}V=\underbrace {V^{*}\otimes V^{*}\otimes \cdots \otimes V^{*}} _{k{\text{ times}}}.} It is defined by F ∗ ( α ) ( X 1 , X 2 , … , X k ) = α ( F ∗ X 1 , F ∗ X 2 , … , F ∗ X k ) , {\displaystyle F^{*}(\alpha )\left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(F_{*}X_{1},\,F_{*}X_{2},\,\ldots ,\,F_{*}X_{k}\right),} where the subscript star denotes the pushforward of the map F, and X1, X2, ..., Xk are vectors in TpM. (This is in accord with what was detailed about the pullback of the inclusion map. In the general case here, one cannot proceed as simply because F∗X1 ≠ X1 in general.) The pushforward of vectors under general maps: Heuristically, pulling back a tensor to p ∈ M from F(p) ∈ N feeding it vectors residing at p ∈ M is by definition the same as pushing forward the vectors from p ∈ M to F(p) ∈ N feeding them to the tensor residing at F(p) ∈ N. Further unwinding the definitions, the pushforward F∗: TMp → TNF(p) of a vector field under a map F: M → N between manifolds is defined by F ∗ ( X ) f = X ( f ∘ F ) , {\displaystyle F_{*}(X)f=X(f\circ F),} where f is a function on N. When M = Rm, N= Rn the pushforward of F reduces to DF: Rm → Rn, the ordinary differential, which is given by the Jacobian matrix of partial derivatives of the component functions. The differential is the best linear approximation of a function F from Rm to Rn. The pushforward is the smooth manifold version of this. It acts between tangent spaces, and is in coordinates represented by the Jacobian matrix of the coordinate representation of the function. The corresponding pullback is the dual map from the dual of the range tangent space to the dual of the domain tangent space, i.e. it is a linear map, F ∗ : T F ( p ) ∗ N → T p ∗ M . {\displaystyle F^{*}\colon T_{F(p)}^{*}N\rightarrow T_{p}^{*}M.} In order to exhibit the metric, it is necessary to pull it back via a suitable parametrization. A parametrization of a submanifold S of a manifold M is a map U ⊂ Rm → M whose range is an open subset of S. If S has the same dimension as M, a parametrization is just the inverse of a coordinate map φ: M → U ⊂ Rm. The parametrization to be used is the inverse of hyperbolic stereographic projection. This is illustrated in the figure to the right for n = 2. It is instructive to compare to stereographic projection for spheres. Stereographic projection σ: HnR → Rn and its inverse σ−1: Rn → HnR are given by σ ( τ , x ) = u = R x R + τ , σ − 1 ( u ) = ( τ , x ) = ( R R 2 + | u | 2 R 2 − | u | 2 , 2 R 2 u R 2 − | u | 2 ) , {\displaystyle {\begin{aligned}\sigma (\tau ,\mathbf {x} )=\mathbf {u} &={\frac {R\mathbf {x} }{R+\tau }},\\\sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )&=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right),\end{aligned}}} where, for simplicity, τ ≡ ct. The (τ, x) are coordinates on Mn+1 and the u are coordinates on Rn. Let H R n = { ( τ , x 1 , … , x n ) ⊂ M : − τ 2 + ( x 1 ) 2 + ⋯ + ( x n ) 2 = − R 2 , τ > 0 } {\displaystyle \mathbf {H} _{R}^{n}=\left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\subset \mathbf {M} :-\tau ^{2}+\left(x^{1}\right)^{2}+\cdots +\left(x^{n}\right)^{2}=-R^{2},\tau >0\right\}} and let S = ( − R , 0 , … , 0 ) . 
{\displaystyle S=(-R,0,\ldots ,0).} If P = ( τ , x 1 , … , x n ) ∈ H R n , {\displaystyle P=\left(\tau ,x^{1},\ldots ,x^{n}\right)\in \mathbf {H} _{R}^{n},} then it is geometrically clear that the vector P S → {\displaystyle {\overrightarrow {PS}}} intersects the hyperplane { ( τ , x 1 , … , x n ) ∈ M : τ = 0 } {\displaystyle \left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\in M:\tau =0\right\}} once in point denoted U = ( 0 , u 1 ( P ) , … , u n ( P ) ) ≡ ( 0 , u ) . {\displaystyle U=\left(0,u^{1}(P),\ldots ,u^{n}(P)\right)\equiv (0,\mathbf {u} ).} One has S + S U → = U ⇒ S U → = U − S , S + S P → = P ⇒ S P → = P − S {\displaystyle {\begin{aligned}S+{\overrightarrow {SU}}&=U\Rightarrow {\overrightarrow {SU}}=U-S,\\S+{\overrightarrow {SP}}&=P\Rightarrow {\overrightarrow {SP}}=P-S\end{aligned}}} or S U → = ( 0 , u ) − ( − R , 0 ) = ( R , u ) , S P → = ( τ , x ) − ( − R , 0 ) = ( τ + R , x ) . . {\displaystyle {\begin{aligned}{\overrightarrow {SU}}&=(0,\mathbf {u} )-(-R,\mathbf {0} )=(R,\mathbf {u} ),\\{\overrightarrow {SP}}&=(\tau ,\mathbf {x} )-(-R,\mathbf {0} )=(\tau +R,\mathbf {x} ).\end{aligned}}.} By construction of stereographic projection one has S U → = λ ( τ ) S P → . {\displaystyle {\overrightarrow {SU}}=\lambda (\tau ){\overrightarrow {SP}}.} This leads to the system of equations R = λ ( τ + R ) , u = λ x . {\displaystyle {\begin{aligned}R&=\lambda (\tau +R),\\\mathbf {u} &=\lambda \mathbf {x} .\end{aligned}}} The first of these is solved for λ and one obtains for stereographic projection σ ( τ , x ) = u = R x R + τ . {\displaystyle \sigma (\tau ,\mathbf {x} )=\mathbf {u} ={\frac {R\mathbf {x} }{R+\tau }}.} Next, the inverse σ−1(u) = (τ, x) must be calculated. Use the same considerations as before, but now with U = ( 0 , u ) P = ( τ ( u ) , x ( u ) ) . , {\displaystyle {\begin{aligned}U&=(0,\mathbf {u} )\\P&=(\tau (\mathbf {u} ),\mathbf {x} (\mathbf {u} )).\end{aligned}},} one gets τ = R ( 1 − λ ) λ , x = u λ , {\displaystyle {\begin{aligned}\tau &={\frac {R(1-\lambda )}{\lambda }},\\\mathbf {x} &={\frac {\mathbf {u} }{\lambda }},\end{aligned}}} but now with λ depending on u. The condition for P lying in the hyperboloid is − τ 2 + | x | 2 = − R 2 , {\displaystyle -\tau ^{2}+|\mathbf {x} |^{2}=-R^{2},} or − R 2 ( 1 − λ ) 2 λ 2 + | u | 2 λ 2 = − R 2 , {\displaystyle -{\frac {R^{2}(1-\lambda )^{2}}{\lambda ^{2}}}+{\frac {|\mathbf {u} |^{2}}{\lambda ^{2}}}=-R^{2},} leading to λ = R 2 − | u | 2 2 R 2 . {\displaystyle \lambda ={\frac {R^{2}-|u|^{2}}{2R^{2}}}.} With this λ, one obtains σ − 1 ( u ) = ( τ , x ) = ( R R 2 + | u | 2 R 2 − | u | 2 , 2 R 2 u R 2 − | u | 2 ) . {\displaystyle \sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).} One has h R 1 ( n ) = η | H R 1 ( n ) = ( d x 1 ) 2 + ⋯ + ( d x n ) 2 − d τ 2 {\displaystyle h_{R}^{1(n)}=\eta |_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}\right)^{2}+\cdots +\left(dx^{n}\right)^{2}-d\tau ^{2}} and the map σ − 1 : R n → H R 1 ( n ) ; σ − 1 ( u ) = ( τ ( u ) , x ( u ) ) = ( R R 2 + | u | 2 R 2 − | u | 2 , 2 R 2 u R 2 − | u | 2 ) . 
{\displaystyle \sigma ^{-1}:\mathbf {R} ^{n}\rightarrow \mathbf {H} _{R}^{1(n)};\quad \sigma ^{-1}(\mathbf {u} )=(\tau (\mathbf {u} ),\,\mathbf {x} (\mathbf {u} ))=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},\,{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).} The pulled back metric can be obtained by straightforward methods of calculus; ( σ − 1 ) ∗ η | H R 1 ( n ) = ( d x 1 ( u ) ) 2 + ⋯ + ( d x n ( u ) ) 2 − ( d τ ( u ) ) 2 . {\displaystyle \left.\left(\sigma ^{-1}\right)^{*}\eta \right|_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}(\mathbf {u} )\right)^{2}+\cdots +\left(dx^{n}(\mathbf {u} )\right)^{2}-\left(d\tau (\mathbf {u} )\right)^{2}.} One computes according to the standard rules for computing differentials (though one is really computing the rigorously defined exterior derivatives), d x 1 ( u ) = d ( 2 R 2 u 1 R 2 − | u | 2 ) = ∂ ∂ u 1 2 R 2 u 1 R 2 − | u | 2 d u 1 + ⋯ + ∂ ∂ u n 2 R 2 u 1 R 2 − | u | 2 d u n + ∂ ∂ τ 2 R 2 u 1 R 2 − | u | 2 d τ , ⋮ d x n ( u ) = d ( 2 R 2 u n R 2 − | u | 2 ) = ⋯ , d τ ( u ) = d ( R R 2 + | u | 2 R 2 − | u | 2 ) = ⋯ , {\displaystyle {\begin{aligned}dx^{1}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}\right)={\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}+\cdots +{\frac {\partial }{\partial u^{n}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{n}+{\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau ,\\&\ \ \vdots \\dx^{n}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{n}}{R^{2}-|u|^{2}}}\right)=\cdots ,\\d\tau (\mathbf {u} )&=d\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)=\cdots ,\end{aligned}}} and substitutes the results into the right hand side. This yields ( σ − 1 ) ∗ h R 1 ( n ) = 4 R 2 [ ( d u 1 ) 2 + ⋯ + ( d u n ) 2 ] ( R 2 − | u | 2 ) 2 ≡ h R 2 ( n ) . {\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.} One has ∂ ∂ u 1 2 R 2 u 1 R 2 − | u | 2 d u 1 = 2 ( R 2 − | u | 2 ) + 4 R 2 ( u 1 ) 2 ( R 2 − | u | 2 ) 2 d u 1 , ∂ ∂ u 2 2 R 2 u 1 R 2 − | u | 2 d u 2 = 4 R 2 u 1 u 2 ( R 2 − | u | 2 ) 2 d u 2 , {\displaystyle {\begin{aligned}{\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}&={\frac {2\left(R^{2}-|u|^{2}\right)+4R^{2}\left(u^{1}\right)^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{1},\\{\frac {\partial }{\partial u^{2}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{2}&={\frac {4R^{2}u^{1}u^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{2},\end{aligned}}} and ∂ ∂ τ 2 R 2 u 1 R 2 − | u | 2 d τ 2 = 0. {\displaystyle {\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau ^{2}=0.} With this one may write d x 1 ( u ) = 2 R 2 ( R 2 − | u | 2 ) d u 1 + 4 R 2 u 1 ( u ⋅ d u ) ( R 2 − | u | 2 ) 2 , {\displaystyle dx^{1}(\mathbf {u} )={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)du^{1}+4R^{2}u^{1}(\mathbf {u} \cdot d\mathbf {u} )}{\left(R^{2}-|u|^{2}\right)^{2}}},} from which ( d x 1 ( u ) ) 2 = 4 R 2 ( r 2 − | u | 2 ) 2 ( d u 1 ) 2 + 16 R 4 ( R 2 − | u | 2 ) ( u ⋅ d u ) u 1 d u 1 + 16 R 4 ( u 1 ) 2 ( u ⋅ d u ) 2 ( R 2 − | u | 2 ) 4 . 
With this one may write {\displaystyle dx^{1}(\mathbf {u} )={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)du^{1}+4R^{2}u^{1}(\mathbf {u} \cdot d\mathbf {u} )}{\left(R^{2}-|u|^{2}\right)^{2}}},} from which {\displaystyle \left(dx^{1}(\mathbf {u} )\right)^{2}={\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left(du^{1}\right)^{2}+16R^{4}\left(R^{2}-|u|^{2}\right)\left(\mathbf {u} \cdot d\mathbf {u} \right)u^{1}du^{1}+16R^{4}\left(u^{1}\right)^{2}\left(\mathbf {u} \cdot d\mathbf {u} \right)^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.} Summing this formula one obtains {\displaystyle {\begin{aligned}&\left(dx^{1}(\mathbf {u} )\right)^{2}+\cdots +\left(dx^{n}(\mathbf {u} )\right)^{2}\\={}&{\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]+16R^{4}\left(R^{2}-|u|^{2}\right)(\mathbf {u} \cdot d\mathbf {u} )^{2}+16R^{4}|u|^{2}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}\\={}&{\frac {4R^{4}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{4}}}+R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.\end{aligned}}} Similarly, for τ one gets {\displaystyle d\tau =\sum _{i=1}^{n}{\frac {\partial }{\partial u^{i}}}\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)du^{i}+{\frac {\partial }{\partial \tau }}\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)d\tau =\sum _{i=1}^{n}{\frac {4R^{3}u^{i}\,du^{i}}{\left(R^{2}-|u|^{2}\right)^{2}}},} yielding {\displaystyle -d\tau ^{2}=-\left({\frac {4R^{3}\left(\mathbf {u} \cdot d\mathbf {u} \right)}{\left(R^{2}-|u|^{2}\right)^{2}}}\right)^{2}=-R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.} Now add this contribution to finally get {\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{4}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.} This last equation shows that the metric on the ball is identical to the Riemannian metric {\displaystyle h_{R}^{2(n)}} in the Poincaré ball model, another standard model of hyperbolic geometry.
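The end result can be verified symbolically as well. A small SymPy sketch, again assuming n = 2 (names illustrative), assembles the pullback metric components g_ij = Σ_k ∂x^k/∂u^i ∂x^k/∂u^j − ∂τ/∂u^i ∂τ/∂u^j and compares them with the conformal factor 4R⁴/(R² − |u|²)²:

```python
# Symbolic verification that the pulled-back metric is conformally Euclidean
# with factor 4 R^4 / (R^2 - |u|^2)^2, for n = 2.
import sympy as sp

R, u1, u2 = sp.symbols('R u1 u2', positive=True)
U = [u1, u2]
den = R**2 - (u1**2 + u2**2)

tau = R * (R**2 + u1**2 + u2**2) / den
x = [2 * R**2 * u1 / den, 2 * R**2 * u2 / den]

conformal = 4 * R**4 / den**2  # expected coefficient of h_R^{2(n)}
for i in range(2):
    for j in range(2):
        g_ij = sum(sp.diff(xk, U[i]) * sp.diff(xk, U[j]) for xk in x) \
               - sp.diff(tau, U[i]) * sp.diff(tau, U[j])
        expected = conformal if i == j else 0
        assert sp.simplify(g_ij - expected) == 0
print("pullback metric = 4 R^4 (du1^2 + du2^2) / (R^2 - |u|^2)^2")
```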
The pullback can be computed in a different fashion. By definition, {\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}(V,\,V)=h_{R}^{1(n)}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right)=\eta |_{\mathbf {H} _{R}^{1(n)}}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right).} In coordinates, {\displaystyle \left(\sigma ^{-1}\right)_{*}V=\left(\sigma ^{-1}\right)_{*}V^{i}{\frac {\partial }{\partial u^{i}}}=V^{i}{\frac {\partial x^{j}}{\partial u^{i}}}{\frac {\partial }{\partial x^{j}}}+V^{i}{\frac {\partial \tau }{\partial u^{i}}}{\frac {\partial }{\partial \tau }}=Vx^{j}{\frac {\partial }{\partial x^{j}}}+V\tau {\frac {\partial }{\partial \tau }}.} One has from the formula for σ−1 {\displaystyle {\begin{aligned}Vx^{j}&=V^{i}{\frac {\partial }{\partial u^{i}}}\left({\frac {2R^{2}u^{j}}{R^{2}-|u|^{2}}}\right)={\frac {2R^{2}V^{j}}{R^{2}-|u|^{2}}}+{\frac {4R^{2}u^{j}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}},\quad \left({\text{here }}V|u|^{2}=2\sum _{k=1}^{n}V^{k}u^{k}\equiv 2\langle \mathbf {V} ,\,\mathbf {u} \rangle \right)\\V\tau &=V\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)={\frac {4R^{3}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}}.\end{aligned}}} Lastly, since the cross terms cancel, {\displaystyle \eta \left(\sigma _{*}^{-1}V,\,\sigma _{*}^{-1}V\right)=\sum _{j=1}^{n}\left(Vx^{j}\right)^{2}-(V\tau )^{2}={\frac {4R^{4}|V|^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}+{\frac {16R^{4}\langle \mathbf {V} ,\,\mathbf {u} \rangle ^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}\left[\left(R^{2}-|u|^{2}\right)+|u|^{2}-R^{2}\right]={\frac {4R^{4}|V|^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}=h_{R}^{2(n)}(V,\,V),} and the same conclusion is reached. External links: Media related to Minkowski diagrams at Wikimedia Commons |
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cx2gn239exlo] | [TOKENS: 4249] |
How dark web agent spotted bedroom wall clue to rescue girl from years of harm. By Sam Piranty, BBC Eye Investigations. Warning: This article contains details about sexual abuse. Specialist online investigator Greg Squire had hit a dead end in his efforts to rescue an abused girl his team had named Lucy. Disturbing images of her were being shared on the dark web - an encrypted corner of the internet only accessible using special software designed to make owners digitally untraceable. But even with that level of subterfuge, the abuser was conscious of "covering their tracks", cropping or altering any identifying features, says Squire. It was impossible to work out who, or where, Lucy was. What he was soon to discover was that the clue to the 12-year-old's location was hidden in plain sight. Squire works for US Department of Homeland Security Investigations in an elite unit which attempts to identify children appearing in sexual abuse material. A BBC World Service team has spent five years filming with Squire, and other investigative units in Portugal, Brazil, and Russia - showing them solving cases such as that of a kidnapped and presumed-dead seven-year-old in Russia, and the arrest of a Brazilian man responsible for five of the biggest child-abuse forums on the dark web. The unprecedented access shows how these cases are often cracked, not through state-of-the-art technology, but by spotting tiny revealing details in images or chat forums. Squire and his team monitor dark web chatrooms around the clock to watch for any clues that could identify and locate abused children. Squire cites Lucy's case, which he tackled early in his career, as the inspiration for his long-term dedication. He found it especially disturbing that Lucy was about the same age as his own daughter, and new photos of her being assaulted, seemingly in her bedroom, were constantly appearing. Squire and his team could see, from the type of light sockets and electrical outlets visible in the images, that Lucy was in North America. But that was about it. They contacted Facebook, which at the time dominated the social media landscape, asking for help scouring uploaded family photos - to see if Lucy was in any of them. But Facebook, despite having facial recognition technology, said it "did not have the tools" to help. So Squire and his colleagues analysed everything they could see in Lucy's room: the bedspread, her outfits, her stuffed toys. Looking for any element which might help. And then they had a minor breakthrough. The team discovered that a sofa seen in some of the images was only sold regionally, not nationally, and therefore had a more limited customer base. But that still amounted to about 40,000 people. "At that point in the investigation, we're [still] looking at 29 states here in the US. I mean, you're talking about tens of thousands of addresses, and that's a very, very daunting task," says Squire. The team looked for more clues. And that is when they realised something as mundane as the exposed brick wall in Lucy's bedroom could give them a lead. "So, I started just Googling bricks and it wasn't too many searches [before] I found the Brick Industry Association," says Squire. "And the woman on the phone was awesome. She was like, 'how can the brick industry help?'" She offered to share the photo with brick experts all over the country.
The response was almost immediate, he says. One of the people who got in touch was John Harp, who had been working in brick sales since 1981. "I noticed that the brick was a very pink-cast brick, and it had a little bit of a charcoal overlay on it. It was a modular eight-inch brick and it was square-edged," he says. "When I saw that, I knew exactly what the brick was," he adds. It was, he told Squire, a "Flaming Alamo". "[Our company] made that brick from the late 60s through about the middle part of the 80s, and I had sold millions of bricks from that plant." John Harp was able to identify the type of brick in the wall shown behind Lucy. Initially Squire was ecstatic, expecting they could access a digitised customer list. But Harp broke the news that the sales records were just a "pile of notes" that went back decades. He did however reveal a key detail about bricks, Squire says. "He goes: 'Bricks are heavy.' And he said: 'So heavy bricks don't go very far.'" This changed everything. The team returned to the sofa customer list and narrowed that down to just those clients who lived within a 100-mile radius of Harp's brick factory in the US' south-west. From that list of 40 or 50 people, it was easy to find and trawl their social media. And that is when they found a photo of Lucy on Facebook with an adult who looked as though she was close to the girl - possibly a relative. They worked out the woman's address, and then used that to find out every other address connected with that person, and all the people they had ever lived with. That narrowed Lucy's possible address down further - but they didn't want to go door to door, making enquiries. Get the address wrong, and they could risk the suspect being tipped off that he was on the authorities' radar. So Squire and his colleagues began sending photos of these houses to John Harp, the brick expert. Squire at his home in New Hampshire - he found it very disturbing that Lucy was a similar age to his own daughter. Flaming Alamos were not visible on the outside of any of the homes, because the properties were clad in other materials. But the team asked Harp to assess - by looking at their style and exterior - if these properties were likely to have been built during a period when Flaming Alamos had been on sale. "We would basically take a screenshot of that house or residence and shoot it over to John and say 'would this house have these bricks inside?'" says Squire. Finally they had a breakthrough. They found an address that Harp believed was likely to feature a Flaming Alamo brick wall, and was on the sofa customer-base list. "So we narrowed it down to [this] one address… and started the process of confirming who was living there through state records, driver's licence… information on schools," says Squire. The team realised that in the household with Lucy was her mother's boyfriend - a convicted sex offender. Within hours, local Homeland Security agents had arrested the offender, who had been raping Lucy for six years. He was subsequently sentenced to more than 70 years in jail. Brick expert Harp was delighted to hear Lucy was safe, especially given his own experiences as a long-term foster parent. "We've had over 150 different children in our home. We've adopted three.
So, doing that over those years, we have a lot of children in our home that were [previously] abused," he said. "What [Squire's team] do day in and day out, and what they see, is a magnification of hundreds of times of what I've seen or had to deal with." Squire has struggled with his mental health as a result of his work. A few years ago, that pressure on Squire started to take a real toll on his mental health, and he admits that, when he wasn't working, "alcohol was a bigger part of my life than it should have been". "At that point my kids were a bit older… and, you know, that almost enables you to push harder. Like… 'I bet if I get up at three this morning, I can surprise [a perpetrator] online.' "But meanwhile, personally… 'Who's Greg? I don't even know what he likes to do.' All of your friends… during the day, you know, they're criminals… All they do is talk about the most horrific things all day long." Not long afterwards, his marriage broke down, and he says he began to have suicidal thoughts. It was his colleague Pete Manning who encouraged him to seek help after noticing his friend seemed to be struggling. "I feel honoured to be part of the team that can make a difference," says Squire, pictured with friend and colleague Pete Manning. "It's hard when the thing that brings you so much energy and drive is also the thing that's slowly destroying you," Manning says. Squire says exposing his vulnerabilities to the light was the first step to getting better and continuing to do a job he is proud of. "I feel honoured to be part of the team that can make a difference instead of watching it on TV or hearing about it… I'd rather be right in there in the fight trying to stop it." Last summer Greg met Lucy, now in her 20s, for the first time. Lucy (left), now an adult, told Squire she had been praying help would come. She told him her ability to now discuss what she went through was testament to the support she has around her. "I have more stability. I'm able to have the energy to talk to people [about the abuse], which I could not have done… even, like, a couple years ago." She said at the point Homeland Security ended her abuse she had been "praying actively for it to end". "Not to sound cliché, but it was a prayer answered." Squire told her he wished he had been able to communicate that help was on its way. "You wish there was some telepathy and you could reach out and be like, 'listen, we're coming'." The BBC asked Facebook why it couldn't use its facial recognition technology to assist the hunt for Lucy. It responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can." If you've been a victim of child sexual abuse, a victim of crime or have feelings of despair, and are in the UK, you'll find details of help and support at bbc.co.uk/actionline. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-Zeilik25-16] | [TOKENS: 5641] |
Contents World The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways how things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. 
Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way how things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. 
Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other. This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. 
This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. Against pantheism, it holds that there is no outright identity between the two. History of philosophy In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is due to the fact that physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world". "World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things.
It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things; it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just like the play is more than the imaginary realities appearing in it so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes. Religion Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of "us" possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in this tradition in a sense that includes both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools. Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya.
This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. Liberation is possible in the course of overcoming this illusion by acquiring the knowledge of Brahman, according to Advaita Vedanta. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar). Related terms and problems A worldview is a comprehensive representation of the world and our place in it. As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name.
Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. This seems to contradict the very idea of a plurality of worlds since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time and matter all have their origin in one initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to sufficiently cool down for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized concerning their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs.
One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberalists acknowledge the importance of states but they also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, on this perspective, is more complex than a mere balance of power since more different agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. 
If the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it this way, and it may change, since it is not prefigured by human nature, according to the constructivists. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Windows] | [TOKENS: 6944] |
Contents Microsoft Windows Windows is a proprietary graphical operating system developed and marketed by Microsoft. Windows is grouped into families that cater to particular sectors of the computing industry – Windows for personal computers, Windows Server for servers, and Windows IoT for embedded systems. Windows itself is further grouped into editions that cater to different users – Home for home users, Professional for advanced users, Education for schools, and Enterprise for corporations. Windows is sold both as a consumer retail product and to computer manufacturers, who bundle and distribute it with their systems. The first version of Windows, Windows 1.0, was released on November 20, 1985, as a graphical operating system shell for MS-DOS in response to growing interest in graphical user interfaces (GUIs). The name Windows is a reference to the windowing system in GUIs. The 1990 release of Windows 3.0 catapulted its market success and led to the launch of various other product families, including the (now-defunct) Windows Mobile, Windows Phone, and Windows CE/Embedded Compact. Windows is the most popular desktop operating system in the world, with an 80% market share as of February 2026, and the second-most popular operating system overall, behind Android. As of August 2025, Windows 11 is the most used desktop version of Windows, with a market share of 53%. Product line All members of the Windows product family are, as of 2026, based on Windows NT. The first version of Windows in that product line, Windows NT 3.1, was intended for server computing and corporate workstations. It now consists of four sub-families that tend to be released almost simultaneously and share the same kernel. These top-level Windows families are no longer actively developed. History The term Windows collectively describes any or all of several generations of Microsoft operating system products. These products are generally categorized as follows. The history of Windows dates back to 1981 when Microsoft started work on a program called "Interface Manager". The name "Windows" comes from the fact that the system was one of the first to use graphical boxes to represent programs; in the industry, at the time, these were called "windows" and the underlying software was called "windowing software." It was announced in November 1983 (after the Apple Lisa, but before the Macintosh) under the name "Windows", but Windows 1.0 was not released until November 1985. Windows 1.0 was to compete with Apple's operating system, but achieved little popularity. Windows 1.0 is not a complete operating system; rather, it extends MS-DOS. The shell of Windows 1.0 is a program known as the MS-DOS Executive. Components included Calculator, Calendar, Cardfile, Clipboard Viewer, Clock, Control Panel, Notepad, Paint, Reversi, Terminal and Write. Windows 1.0 does not allow overlapping windows. Instead, all windows are tiled. Only modal dialog boxes may appear over other windows. Microsoft sold Windows development libraries with the C development environment, which included numerous windows samples. Windows 2.0 was released in December 1987, and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows. This change led to Apple Computer filing a suit against Microsoft alleging infringement on Apple's copyrights (eventually settled in court in Microsoft's favor in 1993).
Windows 2.0 also introduced more sophisticated keyboard shortcuts and could make use of expanded memory. Windows 2.1 was released in two different versions: Windows/286 and Windows/386. Windows/386 uses the virtual 8086 mode of the Intel 80386 to multitask several DOS programs and the paged memory model to emulate expanded memory using available extended memory. Windows/286, in spite of its name, runs on both Intel 8086 and Intel 80286 processors. It runs in real mode but can make use of the high memory area. In addition to full Windows packages, there were runtime-only versions that shipped with early Windows software from third parties and made it possible to run their Windows software on MS-DOS and without the full Windows feature set. The early versions of Windows are often thought of as graphical shells, mostly because they ran on top of MS-DOS and used it for file system services. However, even the earliest Windows versions already assumed many typical operating system functions; notably, having their own executable file format and providing their own device drivers (timer, graphics, printer, mouse, keyboard and sound). Unlike MS-DOS, Windows allowed users to execute multiple graphical applications at the same time, through cooperative multitasking. Windows implemented an elaborate, segment-based, software virtual memory scheme, which allowed it to run applications larger than available memory: code segments and resources were swapped in and thrown away when memory became scarce; data segments moved in memory when a given application had relinquished processor control. Windows 3.0, released in 1990, improved the design, mostly because of virtual memory and loadable virtual device drivers (VxDs) that allow Windows to share arbitrary devices between multi-tasked DOS applications.[citation needed] Windows 3.0 applications can run in protected mode, which gives them access to several megabytes of memory without the obligation to participate in the software virtual memory scheme. They run inside the same address space, where the segmented memory provides a degree of protection. Windows 3.0 also featured improvements to the user interface. Microsoft rewrote critical operations from C into assembly. Windows 3.0 was the first version of Windows to achieve broad commercial success, selling 2 million copies in the first six months. Windows 3.1, made generally available on March 1, 1992, featured a facelift. In October 1992, Windows for Workgroups, a special version with integrated peer-to-peer networking features, was released. It was sold along with Windows 3.1. Support for Windows 3.1 ended on December 31, 2001. Windows 3.2, released in 1994, is an updated version of the Chinese version of Windows 3.1. The update was limited to this language version, as it fixed only issues related to the complex writing system of the Chinese language. Windows 3.2 was generally sold by computer manufacturers with a ten-disk version of MS-DOS that also had Simplified Chinese characters in basic output and some translated utilities.[citation needed] The next major consumer-oriented release of Windows, Windows 95, was released on August 24, 1995. While still remaining MS-DOS-based, Windows 95 introduced support for native 32-bit applications, plug and play hardware, preemptive multitasking, long file names of up to 255 characters, and provided increased stability over its predecessors. 
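As a brief aside on the two multitasking models just mentioned: under the cooperative model used by 16-bit Windows, every application has to hand control back voluntarily, whereas a preemptive kernel (as in Windows 95 and Windows NT) interrupts tasks on a timer. The sketch below is a conceptual illustration in Python, not Windows code; the task names and yield points are invented.

```python
# Conceptual sketch (not Windows code): cooperative multitasking, as used by
# 16-bit Windows, only works if every task voluntarily yields control.

def well_behaved(name, steps):
    """A task that does a little work, then yields back to the scheduler."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # cooperative yield point (roughly analogous to returning to the message loop)

def cooperative_scheduler(tasks):
    """Round-robin over tasks; each runs only until its next yield."""
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)          # run until the task yields
            queue.append(task)  # re-queue it for another turn
        except StopIteration:
            pass                # task finished

# Two tasks share the single processor only because both keep yielding.
cooperative_scheduler([well_behaved("word_processor", 3),
                       well_behaved("spreadsheet", 3)])

# A task that never reaches a `yield` would monopolise the machine under this
# model; a preemptive kernel (Windows 95/NT onwards) instead interrupts tasks
# on a timer, so one misbehaving program cannot freeze the rest.
```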
Windows 95 also introduced a redesigned, object oriented user interface, replacing the previous Program Manager with the Start menu, taskbar, and Windows Explorer shell. Windows 95 was a major commercial success for Microsoft; Ina Fried of CNET remarked that "by the time Windows 95 was finally ushered off the market in 2001, it had become a fixture on computer desktops around the world." Microsoft published four OEM Service Releases (OSR) of Windows 95, each of which was roughly equivalent to a service pack. The first OSR of Windows 95 was also the first version of Windows to be bundled with Microsoft's web browser, Internet Explorer. Mainstream support for Windows 95 ended on December 31, 2000, and extended support for Windows 95 ended on December 31, 2001. Windows 95 was followed up with the release of Windows 98 on June 25, 1998, which introduced the Windows Driver Model, support for USB composite devices, support for ACPI, hibernation, and support for multi-monitor configurations. Windows 98 also included integration with Internet Explorer 4 through Active Desktop and other aspects of the Windows Desktop Update (a series of enhancements to the Explorer shell which was also made available for Windows 95). In June 1999, Microsoft released Windows 98 Second Edition, an updated version of Windows 98. Windows 98 SE added Internet Explorer 5.0 and Windows Media Player 6.2 amongst other upgrades. Mainstream support for Windows 98 ended on June 30, 2002, and extended support for Windows 98 ended on July 11, 2006. On September 14, 2000, Microsoft released Windows Me (Millennium Edition), the last DOS-based version of Windows. Windows Me incorporated visual interface enhancements from its Windows NT-based counterpart Windows 2000, had faster boot times than previous versions (which however, required the removal of the ability to access a real mode DOS environment, removing compatibility with some older programs), expanded multimedia functionality (including Windows Media Player 7, Windows Movie Maker, and the Windows Image Acquisition framework for retrieving images from scanners and digital cameras), additional system utilities such as System File Protection and System Restore, and updated home networking tools. However, Windows Me was faced with criticism for its speed and instability, along with hardware compatibility issues and its removal of real mode DOS support. PC World considered Windows Me to be one of the worst operating systems Microsoft had ever released, and the fourth worst tech product of all time. In November 1988, a new development team within Microsoft (which included former Digital Equipment Corporation developers Dave Cutler and Mark Lucovsky) began work on a revamped version of IBM and Microsoft's OS/2 operating system known as "NT OS/2". NT OS/2 was intended to be a secure, multi-user operating system with POSIX compatibility and a modular, portable kernel with preemptive multitasking and support for multiple processor architectures. However, following the successful release of Windows 3.0, the NT development team decided to rework the project to use an extended 32-bit port of the Windows API known as Win32 instead of those of OS/2. Win32 maintained a similar structure to the Windows APIs (allowing existing Windows applications to easily be ported to the platform), but also supported the capabilities of the existing NT kernel. Following its approval by Microsoft's staff, development continued on what was now Windows NT, the first 32-bit version of Windows. 
However, IBM objected to the changes, and ultimately continued OS/2 development on its own. Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. The first release of the resulting operating system, Windows NT 3.1 (named to associate it with Windows 3.1), was released in July 1993, with versions for desktop workstations and servers. Windows NT 3.5 was released in September 1994, focusing on performance improvements and support for Novell's NetWare, and was followed up by Windows NT 3.51 in May 1995, which included additional improvements and support for the PowerPC architecture. Windows NT 4.0 was released in June 1996, introducing the redesigned interface of Windows 95 to the NT series. On February 17, 2000, Microsoft released Windows 2000, a successor to NT 4.0. The Windows NT name was dropped at this point in order to put a greater focus on the Windows brand. The next major version of Windows NT, Windows XP, was released to manufacturing (RTM) on August 24, 2001, and to the general public on October 25, 2001. The introduction of Windows XP aimed to unify the consumer-oriented Windows 9x series with the architecture introduced by Windows NT, a change which Microsoft promised would provide better performance over its DOS-based predecessors. Windows XP also introduced a redesigned user interface (including an updated Start menu and a "task-oriented" Windows Explorer), streamlined multimedia and networking features, Internet Explorer 6, integration with Microsoft's .NET Passport services, a "compatibility mode" to help provide backwards compatibility with software designed for previous versions of Windows, and Remote Assistance functionality. At retail, Windows XP was marketed in two main editions: the "Home" edition was targeted towards consumers, while the "Professional" edition was targeted towards business environments and power users, and included additional security and networking features. Home and Professional were later accompanied by the "Media Center" edition (designed for home theater PCs, with an emphasis on support for DVD playback, TV tuner cards, DVR functionality, and remote controls), and the "Tablet PC" edition (designed for mobile devices meeting its specifications for a tablet computer, with support for stylus pen input and additional pen-enabled applications). Mainstream support for Windows XP ended on April 14, 2009. Extended support ended on April 8, 2014. After Windows 2000, Microsoft also changed its release schedules for server operating systems; the server counterpart of Windows XP, Windows Server 2003, was released in April 2003. It was followed in March 2006 by Windows Server 2003 R2. After a lengthy development process, Windows Vista was released on November 30, 2006, for volume licensing and January 30, 2007, for consumers. It contained a number of new features, from a redesigned shell and user interface to significant technical changes, with a particular focus on security features. It was available in a number of different editions, and has been subject to some criticism, such as a drop in performance, longer boot times, the intrusiveness of the new User Account Control (UAC), and a stricter license agreement. Vista's server counterpart, Windows Server 2008, was released in early 2008. 
On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released to manufacturing (RTM) and released to the public three months later on October 22, 2009. Unlike its predecessor, Windows Vista, which introduced a large number of new features, Windows 7 was intended to be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista was already compatible. Windows 7 has multi-touch support, a redesigned Windows shell with an updated taskbar with revealable jump lists that contain shortcuts to files frequently used with specific applications and shortcuts to tasks within the application, a home networking system called HomeGroup, and performance improvements. Windows 8, the successor to Windows 7, was released generally on October 26, 2012. A number of significant changes were made on Windows 8, including the introduction of a user interface based around Microsoft's Metro design language with optimizations for touch-based devices such as tablets and all-in-one PCs. These changes include the Start screen, which uses large tiles that are more convenient for touch interactions and allow for the display of continually updated information, and a new class of apps which are designed primarily for use on touch-based devices. The new Windows version required a minimum resolution of 1024×768 pixels, effectively making it unfit for netbooks with 800×600-pixel screens. Other changes include increased integration with cloud services and other online platforms (such as social networks and Microsoft's own OneDrive (formerly SkyDrive) and Xbox Live services), the Windows Store service for software distribution, and a new variant known as Windows RT for use on devices that utilize the ARM architecture, and a new keyboard shortcut for screenshots. An update to Windows 8, called Windows 8.1, was released on October 17, 2013, and includes features such as new live tile sizes, deeper OneDrive integration, and many other revisions. Windows 8 and Windows 8.1 have been subject to some criticism, such as the removal of the Start menu. On September 30, 2014, Microsoft announced Windows 10 as the successor to Windows 8.1. It was released on July 29, 2015, and addresses shortcomings in the user interface first introduced with Windows 8. Changes on PC include the return of the Start Menu, a virtual desktop system, and the ability to run Windows Store apps within windows on the desktop rather than in full-screen mode. Windows 10 is said to be available to update from qualified Windows 7 with SP1, Windows 8.1 and Windows Phone 8.1 devices from the Get Windows 10 Application (for Windows 7, Windows 8.1) or Windows Update (Windows 7). In February 2017, Microsoft announced the migration of its Windows source code repository from Perforce to Git. This migration involved 3.5 million separate files in a 300-gigabyte repository. By May 2017, 90 percent of its engineering team was using Git, in about 8500 commits and 1760 Windows builds per day. In June 2021, shortly before Microsoft's announcement of Windows 11, Microsoft updated their lifecycle policy pages for Windows 10, revealing that support for their last release of Windows 10 will end on October 14, 2025. On April 27, 2023, Microsoft announced that version 22H2 would be the last of Windows 10. On June 24, 2021, Windows 11 was announced as the successor to Windows 10 during a livestream. The new operating system was designed to be more user-friendly and understandable. 
It was released on October 5, 2021. As of May 2022,[update] Windows 11 is a free upgrade to Windows 10 users who meet the system requirements. In July 2021, Microsoft announced it will start selling subscriptions to virtualized Windows desktops as part of a new Windows 365 service in the following month. The new service will allow for cross-platform usage, aiming to make the operating system available for both Apple and Android users. It is a separate service and offers several variations including Windows 365 Frontline, Windows 365 Boot, and the Windows 365 app. The subscription service will be accessible through any operating system with a web browser. The new service is an attempt at capitalizing on the growing trend, fostered during the COVID-19 pandemic, for businesses to adopt a hybrid remote work environment, in which "employees split their time between the office and home". As the service will be accessible through web browsers, Microsoft will be able to bypass the need to publish the service through Google Play or the Apple App Store. Microsoft announced Windows 365 availability to business and enterprise customers on August 2, 2021. Multilingual support has been built into Windows since Windows 3.0. The language for both the keyboard and the interface can be changed through the Region and Language Control Panel. Components for all supported input languages, such as Input Method Editors, are automatically installed during Windows installation (in Windows XP and earlier, files for East Asian languages, such as Chinese, and files for right-to-left scripts, such as Arabic, may need to be installed separately, also from the said Control Panel). Third-party IMEs may also be installed if a user feels that the provided one is insufficient for their needs. Since Windows 2000, English editions of Windows NT have East Asian IMEs (such as Microsoft Pinyin IME and Microsoft Japanese IME) bundled, but files for East Asian languages may be manually installed on Control Panel. Interface languages for the operating system are free for download, but some languages are limited to certain editions of Windows. Language Interface Packs (LIPs) are redistributable and may be downloaded from Microsoft's Download Center and installed for any edition of Windows (XP or later) – they translate most, but not all, of the Windows interface, and require a certain base language (the language which Windows originally shipped with). This is used for most languages in emerging markets. Full Language Packs, which translate the complete operating system, are only available for specific editions of Windows (Ultimate and Enterprise editions of Windows Vista and 7, and all editions of Windows 8, 8.1 and RT except Single Language). They do not require a specific base language and are commonly used for more popular languages such as French or Chinese. These languages cannot be downloaded through the Download Center, but are available as optional updates through the Windows Update service (except Windows 8). The interface language of installed applications is not affected by changes in the Windows interface language. The availability of languages depends on the application developers themselves. Windows 8 and Windows Server 2012 introduce a new Language Control Panel where both the interface and input languages can be simultaneously changed, and language packs, regardless of type, can be downloaded from a central location. 
The PC Settings app in Windows 8.1 and Windows Server 2012 R2 also includes a counterpart settings page for this. Changing the interface language also changes the language of preinstalled Windows Store apps (such as Mail, Maps and News) and certain other Microsoft-developed apps (such as Remote Desktop). The above limitations for language packs are however still in effect, except that full language packs can be installed for any edition except Single Language, which caters to emerging markets. Windows NT included support for several platforms before the x86-based personal computer became dominant in the professional world. Windows NT 4.0 and its predecessors supported PowerPC, DEC Alpha and MIPS R4000 (although some of the platforms implement 64-bit computing, the OS treated them as 32-bit). Windows 2000 dropped support for all platforms except the third generation x86 (known as IA-32) or newer in 32-bit mode. The client line of the Windows NT family still ran on IA-32 up to Windows 10 (the server line of the Windows NT family still ran on IA-32 up to Windows Server 2008). With the introduction of the Intel Itanium architecture (IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 counterparts. Windows XP 64-Bit Edition (Version 2003), released in 2003, is the last Windows client operating system to support Itanium. The Windows Server line continued to support this platform until Windows Server 2012; Windows Server 2008 R2 is the last Windows operating system to support the Itanium architecture. On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 editions to support x86-64 (or simply x64), the 64-bit version of the x86 architecture. Windows Vista was the first client version of Windows NT to be released simultaneously in IA-32 and x64 editions. As of 2024, x64 is still supported. An edition of Windows 8 known as Windows RT was specifically created for computers with ARM architecture, and while ARM is still used for Windows smartphones with Windows 10, tablets with Windows RT will not be updated. Starting with the Fall Creators Update (version 1709), Windows 10 includes support for ARM-based PCs. Windows CE (officially known as Windows Embedded Compact) is an edition of Windows that runs on minimalistic computers, like satellite navigation systems and some mobile phones. Windows Embedded Compact is based on its own dedicated kernel, dubbed the Windows CE kernel. Microsoft licenses Windows CE to OEMs and device makers. The OEMs and device makers can modify and create their own user interfaces and experiences, while Windows CE provides the technical foundation to do so. Windows CE was used in the Dreamcast along with Sega's own proprietary OS for the console. Windows CE was the core from which Windows Mobile was derived. Its successor, Windows Phone 7, was based on components from both Windows CE 6.0 R3 and Windows CE 7.0. Windows Phone 8, however, is based on the same NT kernel as Windows 8. Windows Embedded Compact is not to be confused with Windows XP Embedded or Windows NT 4.0 Embedded, modular editions of Windows based on the Windows NT kernel. Xbox OS is an unofficial name given to the version of Windows that runs on Xbox consoles. 
From Xbox One onwards, it is an implementation with an emphasis on virtualization (using Hyper-V), as it is three operating systems running at once, consisting of the core operating system, a second implemented for games and a more Windows-like environment for applications. Microsoft updates Xbox One's OS every month, and these updates can be downloaded from the Xbox Live service to the Xbox and subsequently installed, or by using offline recovery images downloaded via a PC. It was originally based on the NT 6.2 (Windows 8) kernel, and the latest version runs on an NT 10.0 base. This system is sometimes referred to as "Windows 10 on Xbox One". Xbox One and Xbox Series operating systems also allow limited (due to licensing restrictions and testing resources) backward compatibility with previous generation hardware, and the Xbox 360's system is backwards compatible with the original Xbox. Version control system Up to and including every version before Windows 2000, Microsoft used an in-house version control system named Source Library Manager (SLM). Shortly after Windows 2000 was released, Microsoft switched to a fork of Perforce named Source Depot. This system was used until 2017, when it could no longer keep up with the size of Windows.[citation needed] Microsoft had begun to integrate Git into Team Foundation Server in 2013, but Windows (and Office) continued to rely on Source Depot. The Windows code was divided among 65 different repositories with a kind of virtualization layer to produce a unified view of all of the code.[citation needed] In 2017 Microsoft announced that it would start using Git, an open source version control system created by Linus Torvalds, and in May 2017 they reported that the migration into a new Git repository was complete. Each Git repository contains a complete history of all the files, which tends to be very large for Windows. Microsoft has been working on a new project called the Virtual File System for Git (VFSForGit) to address these challenges. In 2021 the VFS for Git was superseded by Scalar. Timeline of releases Usage share and device sales Version market share As a percentage of desktop and laptop systems using Microsoft Windows, according to StatCounter data as of February 2026: Use of Windows 10 has exceeded Windows 7 globally since early 2018. For desktop and laptop computers, according to Net Applications and StatCounter (which track the use of operating systems in devices that are active on the Web), Windows was the most used operating-system family in August 2021, with around 91% usage share according to Net Applications and around 76% usage share according to StatCounter. Including personal computers of all kinds (e.g., desktops, laptops, mobile devices, and game consoles), Windows OSes accounted for 32.67% of usage share in August 2021, compared to Android (highest, at 46.03%), iOS's 13.76%, iPadOS's 2.81%, and macOS's 2.51%, according to Net Applications, and 30.73% of usage share in August 2021, compared to Android (highest, at 42.56%), iOS/iPadOS's 16.53%, and macOS's 6.51%, according to StatCounter. Those statistics do not include servers (including cloud computing, where Linux has significantly more market share than Windows), as Net Applications and StatCounter use web browsing as a proxy for all use. 
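Returning to the version-control discussion above: the problem VFS for Git and Scalar address – a single repository whose full history and blob set are too large to copy onto every developer machine – can also be approached with features in stock Git. The sketch below is illustrative only, not Microsoft's internal tooling; the repository URL, directory names, and branch name are placeholders.

```python
# Illustrative only: stock Git's partial clone and sparse checkout attack the
# same "huge monorepo" problem that VFS for Git and Scalar were built for.
# The URL, directories, and branch below are placeholders, not real repositories.
import subprocess

def run(*args):
    print("$", " ".join(args))
    subprocess.run(args, check=True)

repo_url = "https://example.com/huge-monorepo.git"   # placeholder URL

# 1. Clone without blob contents or a working tree; only commits and trees come down.
run("git", "clone", "--filter=blob:none", "--no-checkout", repo_url, "monorepo")

# 2. Restrict the working tree to the components this developer actually needs.
run("git", "-C", "monorepo", "sparse-checkout", "set", "kernel/", "shell/")

# 3. Check out; missing blobs are fetched lazily, on demand.
run("git", "-C", "monorepo", "checkout", "main")
```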
Security Early versions of Windows were designed at a time when malware and networking were less common, and had few built-in security features; they did not provide access privileges to allow a user to prevent other users from accessing their files, and they did not provide memory protection to prevent one process from reading or writing another process's address space or to prevent a process from reading or writing code or data used by privileged-mode code. While the Windows 9x series offered the option of having separate profiles and home folders for multiple users, it had no concept of access privileges, allowing any user to edit others' files. In addition, while it ran separate 32-bit applications in separate address spaces, protecting an application's code and data from being read or written by another application, it did not protect the first megabyte of memory from userland applications, for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and by writing into this area of memory an application can crash or freeze the operating system. This was a source of instability, as faulty applications could accidentally write into this region, potentially corrupting important operating system memory, which usually resulted in some form of system error and halt. Windows NT was far more secure, implementing access privileges and full memory protection for 32-bit programs and meeting the DoD's C2 security rating, yet these advantages were nullified by the fact that, prior to Windows Vista, the default user account created during the setup process was an administrator account; the user, and any program the user launched, had full access to the machine. Though Windows XP did offer an option of turning administrator accounts into limited accounts, the majority of home users did not do so, partially due to the number of programs which required administrator rights to function properly. As a result, most home users still ran as administrator all the time. These architectural flaws, combined with Windows's very high popularity, made Windows a frequent target of computer worm and virus writers. Furthermore, although Windows NT and its successors are designed for security (including on a network) and multi-user PCs, they were not initially designed with Internet security in mind to the same degree, since, when Windows NT was first developed in the early 1990s, Internet use was less prevalent. In a 2002 strategy memo entitled "Trustworthy computing" sent to every Microsoft employee, Bill Gates declared that security should become Microsoft's highest priority. Windows Vista introduced a privilege elevation system called User Account Control. When logging in as a standard user, a logon session is created and a token containing only the most basic privileges is assigned. In this way, the new logon session is incapable of making changes that would affect the entire system. When logging in as a user in the Administrators group, two separate tokens are assigned. The first token contains all privileges typically awarded to an administrator, and the second is a restricted token similar to what a standard user would receive. User applications, including the Windows shell, are then started with the restricted token, resulting in a reduced privilege environment even under an Administrator account. 
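A minimal sketch of the split-token arrangement just described (the elevation step that follows it is covered in the next paragraph). This is a conceptual model only; the class, field, and privilege names are invented and do not correspond to the Windows security API.

```python
# Conceptual model of UAC's split-token design, based on the description above.
# Names are illustrative; this is not the Windows security API.
from dataclasses import dataclass, field

@dataclass
class Token:
    user: str
    privileges: set = field(default_factory=set)

@dataclass
class LogonSession:
    full_token: Token          # all privileges of the account (administrators only)
    restricted_token: Token    # what programs actually run with by default

def log_on(user, is_admin):
    basic = {"read_own_files", "run_programs"}
    restricted = Token(user, set(basic))
    if is_admin:
        full = Token(user, basic | {"install_drivers", "write_system_files"})
    else:
        full = restricted      # standard users only ever get the basic token
    return LogonSession(full_token=full, restricted_token=restricted)

def start_process(session, elevated=False):
    # Programs normally receive the restricted token, even for administrators;
    # the full token is handed out only after an explicit elevation (UAC prompt).
    return session.full_token if elevated else session.restricted_token

admin = log_on("alice", is_admin=True)
print(start_process(admin).privileges)                 # reduced-privilege default
print(start_process(admin, elevated=True).privileges)  # after UAC consent
```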
When an application requests higher privileges or "Run as administrator" is clicked, UAC will prompt for confirmation and, if consent is given (including administrator credentials if the account requesting the elevation is not a member of the administrators group), start the process using the unrestricted token. Leaked documents from 2013 to 2016, codenamed Vault 7, detail the capabilities of the CIA to perform electronic surveillance and cyber warfare, such as the ability to compromise operating systems such as Windows. In August 2019, computer experts reported that the BlueKeep security vulnerability, CVE-2019-0708, which potentially affects older unpatched Windows versions via the Remote Desktop Protocol and allows for the possibility of remote code execution, may be accompanied by related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability, CVE-2019-1162, based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe), that affects all Windows versions from Windows XP to the then most recent Windows 10 versions; a patch to correct the flaw is available. Microsoft releases security patches through its Windows Update service approximately once a month (usually the second Tuesday of the month), although critical updates are made available at shorter intervals when necessary. Versions subsequent to Windows 2000 SP3 and Windows XP implemented automatic download and installation of updates, substantially increasing the number of users installing security updates. Windows integrates the Windows Defender antivirus, which is seen as one of the best available. Windows also implements Secure Boot, Control Flow Guard, ransomware protection, BitLocker disk encryption, a firewall, and Windows SmartScreen. In July 2024, Microsoft signalled an intention to limit kernel access and improve overall security, following a highly publicised CrowdStrike update that caused 8.5 million Windows PCs to crash. Part of that initiative is to rewrite parts of Windows in Rust, a memory-safe language. All Windows versions from Windows NT 3 have been based on a file system permission model referred to as AGDLP (Accounts, Global, Domain Local, Permissions), in which file permissions are applied to the file or folder in the form of a 'local group' which then has other 'global groups' as members. These global groups then hold other groups or users, depending on the Windows version used. This system varies from other vendor products, such as Linux and NetWare, due to the 'static' allocation of permissions applied directly to the file or folder. However, using this process of AGLP/AGDLP/AGUDLP allows a small number of static permissions to be applied and allows for easy changes to the account groups without reapplying the file permissions on the files and folders.[citation needed] Alternative implementations Owing to the operating system's popularity, a number of applications have been released that aim to provide compatibility with Windows applications, either as a compatibility layer for another operating system, or as a standalone system that can run software written for Windows out of the box. |
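The AGDLP arrangement described above can likewise be sketched as plain data: permissions on the resource name only domain-local groups, which contain global groups, which contain accounts, so day-to-day membership changes never touch the file or folder's permissions. A rough illustration, with invented group, user, and share names:

```python
# Illustrative model of AGDLP (Accounts -> Global groups -> Domain Local groups
# -> Permissions). All names are invented; this is not the Windows security API.
accounts_in_global_group = {
    "G_Finance_Staff": {"maria", "tom"},
    "G_Auditors": {"priya"},
}

# Domain-local groups are what actually appear in the folder's permissions.
global_groups_in_local_group = {
    "DL_PayrollShare_ReadWrite": {"G_Finance_Staff"},
    "DL_PayrollShare_ReadOnly": {"G_Auditors"},
}

# The ACL on the resource references only domain-local groups and stays static.
folder_acl = {
    r"\\server\payroll": {
        "DL_PayrollShare_ReadWrite": {"read", "write"},
        "DL_PayrollShare_ReadOnly": {"read"},
    },
}

def effective_permissions(user, folder):
    """Resolve a user's rights by walking account -> global -> domain-local."""
    rights = set()
    for local_group, perms in folder_acl[folder].items():
        for global_group in global_groups_in_local_group[local_group]:
            if user in accounts_in_global_group[global_group]:
                rights |= perms
    return rights

# Moving "tom" into the auditors group would change his access without ever
# touching the folder's permissions themselves.
print(effective_permissions("maria", r"\\server\payroll"))  # {'read', 'write'}
print(effective_permissions("priya", r"\\server\payroll"))  # {'read'}
```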
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cjd9nllng22o] | [TOKENS: 1918] |
Hollywood studios take aim at 'ultra-realistic' AI video tool 8 days ago Ian Youngs, Culture reporter [Getty Images: One clip features footage of Tom Cruise and Brad Pitt fighting in a film scene] Major US studios have demanded that a powerful new AI video tool, launched by TikTok's Chinese owner ByteDance, must "immediately cease" infringing copyright with its clips based on existing films and shows. Many of the clips are based on real actors, TV shows and films, and the Motion Picture Association told the BBC: "In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorised use of US copyrighted works on a massive scale." The MPA represents the major US studios - Netflix, Paramount Pictures, Prime Video & Amazon MGM Studios, Sony Pictures, Universal Studios, The Walt Disney Studios and Warner Bros Discovery. According to ByteDance, the product has already suspended the ability for people to upload images of real people, and it respects intellectual property rights and copyright protections, and takes any potential infringement seriously. The content referenced was created as part of a limited pre-launch testing phase, it said. [Seedance/X/AI-generated image: An AI-generated clip of Brad Pitt and Tom Cruise fighting has gone viral] The AI tool can quickly make highly realistic clips from a short, simple text prompt, such as a fist fight between Tom Cruise and Brad Pitt, Will Smith battling a red-eyed spaghetti monster or even Friends characters reimagined as otters. The MPA's chairman and CEO, Charles Rivkin, said: "By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs. "ByteDance should immediately cease its infringing activity." According to ByteDance, steps are being taken to further address risks, and it will implement robust policies, monitoring mechanisms and processes to ensure compliance with local regulations. The clips have been flooding social media, and users have also been posting scenes based on shows and films like The Lord of the Rings, Seinfeld, Avengers and Breaking Bad. ByteDance has billed its new AI tool as delivering "an ultra-realistic immersive experience". It immediately set alarm bells ringing in Hollywood and beyond, with Deadpool writer Rhett Reese warning: "I hate to say it. It's likely over for us." A review by US magazine Forbes noted that Seedance 2.0 "offers a level of creative control that mimics a human director" and "enables users to create high-end outputs without needing complicated production tools". While many users are likely to be delighted to have its powers at their fingertips, Reese, who co-wrote and executive produced the Deadpool films among others, said he was "terrified" by the implications. "So many people I love are facing the loss of careers they love. I myself am at risk," he wrote. "When I wrote 'It's over,' I didn't mean it to sound cavalier or flippant. I was blown away by the Pitt v Cruise video because it is so professional. That's exactly why I'm scared. "My glass half empty view is that Hollywood is about to be revolutionized/decimated. If you truly think the Pitt v Cruise video is unimpressive slop, you've got nothing to worry about. 
But I'm shook." 'Original ideas are the hardest part' Heather Anne Campbell, who has written for Saturday Night Live and Rick & Morty, said the results were akin to fan fiction, and that people would still be required to come up with original ideas. "All of these people who have access to the latest AI visualisation engines, like Seedance - they're being given total control to create anything they can imagine - and they're turning out fanfiction," she wrote. "'Breaking bad new scene' or 'goku in live action' etc. "Seems like it's challenging to make something new even when you have the infinite budget to make lifelike tv, film, or animation. Almost like the original ideas are the hardest part." |
"So many people I love are facing the loss of careers they love. I myself am at risk," he wrote. "When I wrote 'It's over,' I didn't mean it to sound cavalier or flippant. I was blown away by the Pitt v Cruise video because it is so professional. That's exactly why I'm scared. "My glass half empty view is that Hollywood is about to be revolutionized/decimated. If you truly think the Pitt v Cruise video is unimpressive slop, you've got nothing to worry about. But I'm shook." 'Original ideas are the hardest part' Heather Anne Campbell, who has written for Saturday Night Live and Rick & Morty, said the results were akin to fan fiction, and that people would still be required to come up with original ideas. "All of these people who have access to the latest AI visualisation engines, like Seedance - they're being given total control to create anything they can imagine - and they're turning out fanfiction," she wrote. "'Breaking bad new scene' or 'goku in live action' etc. "Seems like it's challenging to make something new even when you have the infinite budget to make lifelike tv, film, or animation. Almost like the original ideas are the hardest part." Hollywood writers fear losing work to AI Creative industries 'incredibly worried' about OpenAI-Disney deal The Black Mirror plot about AI that worries actors Tumbler Ridge suspect's ChatGPT account banned before shooting Why fake AI videos of UK urban decline are taking over social media Fixing fashion's erratic sizing problem Child actor 'never thought' he would be cast in drama Tom Page-Turner plays Bill in the BBC new series and spent a number of months filming in Malaysia. 'Breweries using AI could put artists out of work' As two pubs in Newcastle ban AI art, artists discuss the impact it can have on creatives. Film school proud as it hopes for third Bafta win The National Film and Television School is nominated in the British Short Film category. Actor 'so proud' as Belfast film wins Ifta award Nostalgie, based in Belfast, took the Best Short Film prize in the live action category at the Iftas. Woody and Buzz reunite in trailer for Toy Story 5 The old friends come face to face with a new threat - a frog-like tablet device called Lilypad. Copyright 2026 BBC. All rights reserved. The BBC is not responsible for the content of external sites. Read about our approach to external linking. |
======================================== |
[SOURCE: https://www.fast.ai/posts/2021-11-04-data-disasters.html] | [TOKENS: 2328] |
Avoiding Data Disasters Rachel Thomas November 4, 2021 Things can go disastrously wrong in data science and machine learning projects when we undervalue data work, use data in contexts that it wasn't gathered for, or ignore the crucial role that humans play in the data science pipeline. A new multi-university centre focused on Information Resilience, funded by the Australian government's top scientific funding body (ARC), has recently launched. Information Resilience is the capacity to detect and respond to failures and risks across the information chain in which data is sourced, shared, transformed, analysed, and consumed. I'm honored to be a member of the strategy board, and I have been thinking about what information resilience means with respect to data practices. Through a series of case studies and relevant research papers, I will highlight these risks and point towards more resilient practices. Case study: UK covid tracking app Data from a covid-symptom tracking app was used in a research paper to draw wildly inaccurate conclusions about the prevalence of Long Covid, the often debilitating neurological, vascular, and immune disease that can last for months or longer (some patients have been sick for 20 months and counting). The app suggested that only 1.5% of patients still experience symptoms after 3 months, an order of magnitude smaller than the estimates of 10-35% found by other studies. How could this research project have gone so wrong? Well, the app had been designed for a completely different purpose (tracking 1-2 week long respiratory infections), didn't include the most common Long Covid symptoms (such as neurological dysfunction), had a frustrating user-interface that led many patients to quit using it, and made the erroneous assumption that those who stopped logging must be fully recovered. The results from this faulty research paper were widely shared, including in a BBC article, offering false reassurance that Long Covid is much rarer than it is. Patients had been voicing their frustrations with the app all along, and if researchers had listened sooner, they could have collected a much higher quality and more accurate data set. This research failure illustrates a few common issues in data projects:
- The context of the data was not taken into account. The user-interface, the categories listed, the included features – these were all designed to record data about a short-term mild respiratory infection. However, when it was used for a different purpose (long covid patients suffering for months with vascular and neurological symptoms), it did a poor job, and led to missing and incomplete data. This happens all too often, in which data gathered for one context is used for another.
- The people most impacted (long covid patients) were ignored. They had the most accurate expertise on what long covid actually entailed, yet were not listened to. Ignoring this expertise led to lower quality data and erroneous research conclusions. Patients have crucial domain expertise, which is distinct from that of doctors, and must be included in medical data science projects. From the start of the pandemic, patients who had suffered from other debilitating post-viral illnesses warned that we should be on the lookout for long-term illness, even in initially "mild" cases.
Data is Crucial Collecting data about covid and its long-term effects directly from patients was a good idea, but poorly executed in this case. 
Due to privacy and surveillance risks, I frequently remind people not to record data that they don't need. However, the pandemic has been a good reminder of how much data we really do need, and how tough it is when it's missing. At the start of the pandemic in the United States, we had very little data about what was happening – the government was not tabulating information on cases, testing, or hospitalization. How could we know how to react when we didn't understand how many cases there were, what death rates were, how transmissible the disease was, and other crucial information? How could we make policy decisions in the absence of a basic understanding of the facts? In early March 2020, two journalists and a data scientist from a medication-discovery platform began pulling covid data together into a spreadsheet to understand the situation in the USA. This launched into a 15-month long project in which 500 volunteers compiled and published data on COVID-19 testing, cases, hospitalizations, and deaths in the USA. During those 15 months, the Covid Tracking Project was the most comprehensive source of covid data in the USA, even more comprehensive than what the CDC had, and it was used by the CDC, numerous government agencies, and both the Trump and Biden Administrations. It was cited in academic studies and in thousands of news articles. A data infrastructure engineer and contributor for the project later recounted, "It quickly became apparent that daily, close contact with the data was necessary to understand what states were reporting. States frequently changed how, what, and where they reported data. Had we set up a fully automated data capture system in March 2020, it would have failed within days." The project used automation as a way to support and supplement manual work, not to replace it. At numerous points, errors in state reporting mechanisms were caught by eagle-eyed data scientists noticing discrepancies. This vision of using automation to support human work resonates with our interest at fast.ai in "augmentedML", not "autoML". I have written previously and gave an AutoML workshop keynote on how too often automation ignores the important role of human input. Rather than try to automate everything (which often fails), we should focus on how humans and machines can best work together to take advantage of their different strengths. Data Work is Undervalued Interviews of 53 AI practitioners across 6 countries on 3 continents found a pattern that is very familiar to many of us (including me) who work in machine learning: "Everyone wants to do the model work, not the data work." Missing meta-data leads to faulty assumptions. Data collection practices often conflict with the workflows of on-the-ground partners, such as nurses or farmers, who are usually not compensated for this extraneous effort. Too often data work is arduous, invisible, and taken for granted. Undervaluing of data work leads to poor practices and often results in negative, downstream events, including dangerously inaccurate models and months of lost work. Throughout the pandemic, data about covid (both initial cases and long covid) has often been lacking. Many countries have experienced testing shortages, leading to undercounts of how many people have covid. The CDC decision not to track breakthrough cases unless they resulted in hospitalization made it harder to understand the prevalence of break-throughs (a particularly concerning decision since break-throughs can still lead to long covid). 
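Returning briefly to the Covid Tracking Project's working method: automation that flags anomalies for a person to check, rather than silently "correcting" them, can be very simple. The sketch below is a generic illustration, not the project's actual code; the column name and the 3x threshold are invented.

```python
# Generic sketch of "automation that supports manual review": flag suspicious
# day-over-day changes for a human instead of auto-correcting them.
# The column name and the 3x threshold are invented for illustration.
import pandas as pd

def flag_for_review(df, column="new_cases", factor=3.0):
    """Return rows where a value jumps or drops implausibly versus the prior day."""
    prior = df[column].shift(1)
    suspicious = (df[column] > factor * prior.clip(lower=1)) | (df[column] < 0)
    return df[suspicious]

daily = pd.DataFrame({
    "date": pd.date_range("2020-04-01", periods=5),
    "new_cases": [120, 135, 980, 140, -20],   # 980 and -20 warrant a second look
})

# A human then decides whether these are real surges or reporting artifacts.
print(flag_for_review(daily))
```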
In September, it was revealed that British Columbia, Canada was not including covid patients in their ICU counts once the patients were no longer infectious, a secretive decision that obscured how full ICUs were. Some studies of Long Covid have failed to include common symptoms, such as neurological ones, making it harder to understand the prevalence or nature. Data has Context Covid is giving us a first-hand view of how data, which we may sometimes want to think of as “objective”, are shaped by countless human decisions and factors. In the example of the symptom tracking app, decisions about which symptoms were included had a significant impact on the prevalence rate calculated. Design decisions that influenced the ease of use impacted how much data was gathered. Lack of understanding of how the app was being used (and why people quit using it) led to erroneous decisions about which cases should be considered “recovered”. These are all examples of the context for data. Here, the data gathered was reasonably appropriate for understanding initial covid infections (a week or two of respiratory symptoms), but not for patients experiencing months of neurological and vascular symptoms. Numbers can not stand alone, we need to understand how they were measured, who was included and excluded, relevant design decisions, under what situations a dataset is appropriate to use vs. not. As another example, consider covid testing counts: Who has access to testing (this involves health inequities, due to race or urban vs. rural), who is encouraged to get tested (at various times, people without symptoms, children, or other groups have been discouraged from doing so), varying accuracies (e.g. PCR tests are less accurate on children, missing almost half of cases that later go on to seroconvert), and making decisions about what counts as a “case” (I know multiple people who had alternating test results: positive, negative, positive, or the reverse– what counts as a positive case?) One proposal for capturing this context is Datasheets for Datasets. Prior to doing her PhD at Stanford in computer vision and then co-leading Google’s AI ethics team, Dr. Timnit Gebru worked at Apple in circuit design and electrical engineering. In electronics, each component (such as a circuit or transistor) comes with a datasheet that lists when and where it was manufactured, under what conditions it is safe to use, and other specifications. Dr. Gebru drew on this background to propose a similar idea for datasets: listing the context of when and how it was created, what data was included/excluded, recommended uses, potential biases and ethical risks, work needed to maintain it, and so on. This is a valuable proposal towards making the context of data more explicit. The People Most Impacted The inaccurate research and incomplete data from the covid tracking app could have been avoided by drawing on the expertise of patients. Higher quality data could have been collected sooner and more thoroughly, if patients were consulted in the app design and in the related research studies. Participatory approaches to machine learning is an exciting and growing area of research. In any domain, the people who would be most impacted by errors or mistakes need to be included as partners in the design of the project. Often, our approaches to addressing fairness or other ethics issues, further centralize the power of system designers and operators. 
The organizers of an ICML workshop on the topic called for more cooperative, democratic, and participatory approaches instead. We need to think not just about explainability, but about giving people actionable recourse. As Professor Berk Ustun highlights, when someone asks why their loan was denied, usually what they want is not just an explanation but to know what they could change in order to get a loan. We need to design systems with contestability in mind, to include from the start the idea that people should be able to challenge system outputs. We need to include expert panels of perspectives that are often overlooked, depending on the application, this could mean formerly or currently incarcerated people, people who don’t drive, people with very low incomes, disabled people, and many others. The Diverse Voices project from University of Washington Tech Lab provides guidance on how to do this. And it is crucial that this not just be tokenistic participation-washing, but a meaningful, appropriately compensated, and ongoing role in their design and operation. Towards Greater Data Resilience I hope that we can improve data resilience through: - Valuing data work - Documenting context of data - Close contact with the data - Meaningful, ongoing, and compensated involvement of the people impacted And I hope that when our data represents people we can remember the human side. As AI researcher Inioluwa Deborah Raji wrote, “Data are not bricks to be stacked, oil to be drilled, gold to be mined, opportunities to be harvested. Data are humans to be seen, maybe loved, hopefully taken care of.” |
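As a concrete footnote to the "Data has Context" section above: a datasheet can start as something as small as a structured record shipped alongside the data. The sketch below is a minimal illustration loosely inspired by the Datasheets for Datasets proposal; the fields are a small subset of that proposal and the example values are invented.

```python
# Minimal sketch of a dataset "datasheet", loosely following the Datasheets for
# Datasets idea; fields are a small subset and the example values are invented.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collected_when: str
    collected_how: str
    included: str
    excluded: str
    recommended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    maintenance: str = ""

symptom_app_sheet = Datasheet(
    name="covid-symptom-app-logs",
    collected_when="2020-2021",
    collected_how="self-reported daily check-ins in a consumer app",
    included="short-term respiratory symptoms",
    excluded="most neurological and vascular Long Covid symptoms",
    recommended_uses=["estimating acute infection symptom duration"],
    known_limitations=[
        "users who stop logging are not necessarily recovered",
        "symptom list designed for 1-2 week illnesses",
    ],
    maintenance="symptom list reviewed with patient groups before reuse",
)

# Anyone reusing the data can check the sheet before drawing conclusions from it.
print(symptom_app_sheet.known_limitations)
```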
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cy4wnw04e8wo] | [TOKENS: 2391] |
AI coding platform's flaws allow BBC reporter to be hacked 13 February 2026 Joe Tidy, Cyber correspondent, BBC World Service [BBC: The hacker was able to hijack a BBC reporter's laptop to upload this wallpaper] The BBC has been shown a significant - and unfixed - cyber-security risk in a popular AI coding platform. Orchids is a so-called "vibe-coding" tool, meaning people without technical skills can use it to build apps and games by typing a text prompt into a chatbot. Such platforms have exploded in popularity in recent months, and are often heralded as an early example of how various professional services could be done quickly and cheaply by AI. But experts say the ease with which Orchids can be hacked demonstrates the risks of allowing AI bots deep access to our computers in exchange for the convenience of allowing them to carry out tasks autonomously. The BBC has repeatedly asked the company for comment but it has not replied. 'You are hacked' Orchids claims to have a million users, and says it is used by top companies including Google, Uber, and Amazon. It is rated as the best programme for some elements of vibe coding according to ratings from App Bench and other analysts. Its security flaws were demonstrated to the BBC by cyber-security researcher Etizaz Mohsin. I downloaded the Orchids desktop app to my spare laptop, which I use for experiments, and started a vibe-coding project as a test. [Image: Orchids is one of many AI agent platforms that writes code for users who have no experience] I asked Orchids to help me build the code for a computer game based on the BBC News website. Automatically, the AI assistant began compiling code on the screen that, without any experience, I couldn't understand. Exploiting a cyber-security weakness (which we are not disclosing), Mohsin was able to gain access to my project, and view and edit any of the code. He then added a small line of code somewhere in the thousands of lines of letters, numbers and symbols in my project, unbeknown to me. It appears this allowed him to gain access to my computer - because shortly afterwards, a notepad file called "Joe is hacked" appeared on the desktop, and the wallpaper was changed to an image of an AI hacker. The implications of the hack on the platform's tens of thousands of projects were obvious. A nefarious hacker could have easily installed a virus on to my machine without me having to do anything. My or my company's private or financial data could have been stolen. An attacker could have accessed my internet history or even spied through the cameras and microphones. Most hacks involve a victim downloading a piece of malicious software or being tricked into handing over login details. This attack was able to be carried out without any involvement from the victim - a zero-click attack, as it's known. "The vibe-coding revolution has introduced a fundamental shift in how developers interact with their tools, and this shift has created an entirely new class of security vulnerability that didn't exist before," Mohsin told me. "The whole proposition of having the AI handle things for you comes with big risks." [Image: Etizaz Mohsin speaking about cyber-security at the prestigious BlackHat conference] Mohsin, 32, is from Pakistan and now lives in the UK. He has a track record of finding dangerous flaws in software that allow hackers to break in, including work on the infamous Pegasus spyware. 
He said he found the flaw while experimenting with vibe-coding in December 2025 and has spent the weeks since trying to get Orchids to respond on email, LinkedIn, and Discord with around a dozen messages. The Orchids team finally responded to him this week, saying they "possibly missed" his warnings as the team is "overwhelmed with inbound" messages. The San Francisco-based company's LinkedIn page says it was founded in 2025 and has fewer than 10 employees. AI Agent risks Mohsin says he has only found the flaws in Orchids, and not yet in other vibe-coding platforms such as Claude Code, Cursor, Windsurf and Lovable. Nonetheless, experts say it should serve as a warning. "The main security implications of vibe-coding are that without discipline, documentation, and review, such code often fails under attack," says Kevin Curran, professor of cybersecurity at Ulster University. AI tools that carry out complex tasks for us - known as agentic AI - are increasingly hitting the headlines. One recent example is the viral Clawbot agent also known as Moltbot or Open Claw. The AI bot can run tasks on your own device, such as sending WhatsApp messages or managing your calendar, with little human input. It's estimated that the free AI agent has been downloaded by hundreds of thousands of people and has deep access to people's computers - but that also means many potential security risks and flaws. Karolis Arbaciauskas, head of product at the cyber-security company NordPass, says people should be cautious. "While it's exciting and curious to see what an AI agent can do without any security guardrails, this level of access is also extremely insecure," he said. His advice is to run these tools on separate, dedicated machines and use disposable accounts for any experimentation. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Infiniminer] | [TOKENS: 694] |
Contents Infiniminer Infiniminer is an open-source multiplayer sandbox game developed by Zachtronics, centered on block-based construction and excavation. Players assume the role of miners on a team, aiming to collect the most money by mining ore while avoiding or fighting the opposing team. According to its creator, Zachary Barth, the game was inspired by Infinifrag, Team Fortress 2, and Motherload by XGen Studios. The game has been cited as an influence on the development of Minecraft. Gameplay Players are placed into teams within a procedurally generated landscape and are equipped with mining tools. The primary objective is to collect the most money by mining ore while avoiding or fighting an opposing team. The team that earns the most money by the end of the round is declared the winner. During development, Zachary Barth, the creator of Infiniminer, observed that players began using its mechanics for building and item collection, deviating from its original competitive design. This behavior led to the emergence of sandbox-style gameplay. Development Zachary Barth had early experience with programming games in his youth but honed his skills more formally while attending Rensselaer Polytechnic Institute in New York. After graduating, he worked as a game programmer at Microsoft, continuing independent development in his spare time. In April 2009, he released Infiniminer, a multiplayer game that initially focused on competitive mining objectives. However, players soon began using its sandbox mechanics to construct structures. Despite the game's growing popularity, Infiniminer generated no revenue. Barth had not obfuscated the game's source code, which led to it being decompiled and spread online. As a result, unauthorized modifications and hacks became common. Due to the lack of monetization and control over the game's code, Barth discontinued development. Legacy In 2009, Swedish indie developer Markus Persson began early development on the project that would eventually become Minecraft, initially titled RubyDung. Infiniminer significantly influenced the project, particularly in its block-based visual style, first-person perspective, and building mechanics. However, Persson aimed to incorporate additional gameplay elements, such as role-playing features, to distinguish his game from its predecessor. Following the rise in popularity of Minecraft, Barth reflected on the game's impact in an interview with Cascade PBS. He stated, "It was never my plan to have what happened, happen. You don't plan things in life." Although initially shocked by Minecraft's success, Barth acknowledged the game's popularity. Barth later found commercial success with the release of SpaceChem, a puzzle game that grossed over US$1 million on a development budget of just US$4,000. The game's financial success enabled him to leave his position at Microsoft and establish his own studio, Zachtronics, which eventually employed four people. By 2012, however, the studio faced financial difficulties and turned to contract work with the education company Amplify to develop educational games. On 15 March 2014, Infiniminer was featured on the British television program Fresh Meat. According to Barth, a Channel 4 production coordinator informed him of the inclusion. He initially assumed the reference was to Minecraft due to its broader popularity, but the episode featured footage and references from Infiniminer. 
In 2015, Barth released Infinifactory, a puzzle game incorporating elements of both Infiniminer and SpaceChem. |
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cvgn2k285ypo] | [TOKENS: 6756] |
The tech firms embracing a 72-hour working week
9 February 2026 - Theo Leggett, International Business Correspondent

The recruitment website is jazzy, awash with pictures of happy young workers, and festooned with upbeat mini-slogans such as "insane speed", "infinite curiosity" and "customer obsession". Read a bit lower, and there are promises of perks galore: competitive compensation, free meals, free gym membership, free health and dental care and so on. But then comes the catch. Each job ad contains a warning: "Please don't join if you're not excited about… working ~70 hrs/week in person with some of the most ambitious people in NYC."

The website belongs to Rilla, a New York-based tech business which sells AI-based systems that allow employers to monitor sales representatives when they are out and about, interacting with clients. The company has become something of a poster child for a fast-paced workplace culture known as 996, also sometimes referred to as hustle culture or grindcore. In simple terms, it puts a premium on long working hours, typically 9am to 9pm, six days a week (hence "996").

For most of us, that would be gruelling. But according to Will Gao, head of growth at Rilla, its 120 employees simply don't see it that way. "We look for people who are like Olympian athletes, with characteristics of, you know, obsession, infinite ambition. It's people who want to do incredible things and have a lot of fun while doing so," he says. He insists that while the hours are generally long, there's no rigid structure. "If I'm like, 'Holy cow, I have a super idea I'm working on', then I'll just keep working until 2 or 3am, then I'll just roll in the next day at noon or something", he explains.

This kind of approach has become extremely popular in the technology sector over the past few years, and for good reason. The development of artificial intelligence (AI) has been taking place at a breakneck pace, and companies around the world are now working flat-out to develop ways in which it can be exploited and monetised. Huge amounts of money are being ploughed into AI ventures, many of them start-ups. But for every ambitious company founder, the ever-present fear is that someone else will get there first. Speed is of the essence – and tech sector workers are under pressure to work harder, and longer, to get results quickly.

'Slackers are not my brothers'

The 996 culture first came to the fore in China a decade ago. It was embraced by tech companies and start-ups at a time when the country was increasingly focused on transforming itself from the world's workshop for cheap goods into a leader in advanced technologies. It had some powerful advocates, including Jack Ma, the billionaire founder of the retail behemoth Alibaba.com. "I personally think that being able to work 996 is a huge blessing", he wrote in one blog post for employees. "It's not just entrepreneurs; most successful or ambitious artists, scientists, athletes, officials, and politicians work 996 or more", he said in another. "It's not because they have extraordinary perseverance, but because they are deeply passionate about their chosen careers", he added.

Another enthusiast was Richard Liu, founder of the retail giant JD.com, who at one point railed against what he saw as the country's declining work ethic. "Slackers are not my brothers!", he wrote in a controversial email to staff in 2019.

But such an attitude prompted a backlash, including a wave of online complaints that companies were ignoring labour laws and failing to pay overtime, while forcing employees to work excessive hours. By 2021 this chorus of disapproval had become too loud for the authorities to ignore – and prompted a legal crackdown. In China, 996 has not disappeared, but its advocates have generally been a lot quieter. A notable exception was Baidu's one-time head of public relations, Qu Jing, who posted a series of videos on social media in 2024, aggressively defending a hard-working culture. Her brusque dismissal of employees' wellbeing, with the comment "I'm not your mother, I only care about results", provoked outrage. She later apologised, but it ultimately cost Qu her job.

Yet the culture still has fans elsewhere. Last year, Narayana Murthy, the founder of Indian software giant Infosys, spoke admiringly about China's use of 996. He remarked in a TV interview that "no individual, no community, no country has ever come up without hard work".

The AI gold rush

So why has the US tech industry chosen to embrace the trend? A key factor appears to have been the headlong rush to develop ways of using AI. "It's mainly AI companies," explains Adrian Kinnersley, who runs recruitment businesses in Europe and North America. "It's those that have some funding from venture capitalists, that are in a race to develop their products and get them out to market before someone else beats them to it. That's led them into the idea that, if you work longer hours, you win the race."

One of those AI start-ups is run by Magnus Müller, a young German-born entrepreneur. He co-founded Browser-Use, a business that is developing tools to help AI applications interact with web browsers. He lives in a "hacker-house", a shared living and workspace, where he and his colleagues continually swap ideas, and believes working long hours is just a fact of life. "I think it's hard, what we're trying to build. I think it's the problems we're trying to solve, giving AI these extra capabilities. It's super hard, and very competitive, and most often the returns come when you just immerse yourself very deeply into a problem… then suddenly fascinating things happen."

Browser-Use currently has just seven staff, but it is recruiting more. Müller says he is looking for people with the same kind of mentality as himself. Anyone who wants to work a 40-hour week, he says, is unlikely to fit in. "We really look for people who are just addicted, who love what they're doing", he emphasises. "It's like gaming, OK? It's like you're addicted to gaming… for us, it doesn't really feel like work. We just do what we love."

Others disagree. Deedy Das is a partner at Menlo Ventures, a venture capital firm which has a near 50-year record of investing in technology businesses. He thinks the most common mistake young entrepreneurs make is insisting their employees work 996-style hours. "I think the thing young founders get wrong is they view hours worked in and of itself as necessary and sufficient to think of themselves as productive. And that's where the fallacy lies", he explains. "Forcing your employees to come in, and hustle, is a downstream artefact of such a mindset." He thinks such an approach can alienate those with families, as well as experienced older workers who "can actually work far less and achieve much more because they know what they're doing". He adds that continual long hours will lead to long-term burnout.

However, he concedes that for company founders themselves, with skin in the game and the potential to become very wealthy if their business succeeds, different rules apply. "Frankly, I would be shocked if a founder wasn't working 70-80 hours per week. I can personally say… if I'm investing in an early-stage founder, if they're not working 70-80 hours a week, it's probably not a great investment."

Tamara Myles, an academic and author on workplace culture, says hustle culture is unsustainable, especially if people feel compelled to be working at all times. But she concedes there are grey areas. "The nuance here is that a lot of these tech companies that are living this 996 culture are actually not hiding it, they're advertising it. They're selling it as a badge of honour, almost," she says. But that doesn't mean everyone who agrees to work 996 actually wants to, she argues. "You might be staying because the job market is tough right now, or you might be here for a visa, and you depend on it. So there might be power dynamics at play."

Health risks

Yet those who do choose to burn the midnight oil could end up paying a heavy price. Concerns over the health impact of working long hours are certainly not new. In Japan, a country with a long-established hard-work culture, where so-called salarymen have notoriously supported the post-war economy with utter dedication to their employers, there is even a word for it: Karōshi. It means death through overwork, and refers principally to strokes and heart attacks suffered by people working very long hours. Exhausted salarymen fast asleep on the Tokyo underground are not an uncommon sight. Karōjisatsu, meanwhile, refers to people taking or attempting to take their own lives due to workplace stress. Both are recognised in Japanese law, and families are in theory eligible for compensation from a government scheme, although in practice proving a death derives from overwork can be difficult.

More broadly, analysis published in 2021 by the World Health Organization (WHO) and the International Labour Organization (ILO) concluded that long working hours - defined as more than 55 hours per week - had led to 745,000 deaths worldwide from stroke and heart disease in 2016. It concluded that working 55 hours or more a week increased the risk of dying of heart disease by 17% compared to working 35-40 hours, and raised the risk of a stroke by 35%.

The productivity threshold

Then, there's productivity - broadly speaking, the amount that actually gets done for every hour worked. Studies have shown that as hours go up, productivity initially increases - but once a threshold is reached it starts to decline as physical and mental exhaustion sets in. The 'sweet spot' is widely recognised to be around 40 hours per week. As one recent study put it: "At around 40h per five-day work week, workers seem to be able to maintain productivity fairly well, but when individuals exceed this threshold and engage in longer work hours, their job performance gradually weakens because of increased fatigue and underprivileged health conditions."

In other words, once this threshold is reached, the extra output from every hour worked starts to decline. Even so, there will always be a temptation for companies to employ fewer people and get them to work for longer. This is because each extra employee comes at a cost: they have to be recruited, trained if necessary, and paid. But research suggests this approach can backfire. According to Michigan State University, productivity can fall so sharply that "an employee working 70 hours per week has almost no difference in output than an employee working 50 hours per week". This isn't a new concept. A century ago, Henry Ford set an example that other major industrialists would follow, when he cut working hours for staff in his car factories and adopted a 40-hour, five-day week.

100-hour weeks

Nevertheless, there are those who believe that British companies today could take a leaf out of the US tech sector's book. For example, the co-founder and former CEO of BrewDog, James Watt, posted a widely-shared video in which he said: "I think the whole concept of work-life balance was invented by people who hate the work they do. So if you love what you do, you don't need work life balance, you need work-life integration." He subsequently pointed to a study by academics at King's College London that showed people in the UK are among the least likely to believe work should always come first. He said it showed the UK to be "one of the world's least work-oriented countries". In a BBC documentary in 2022, Watt himself was accused of inappropriate behaviour and abuse of power in the workplace. He apologised to anyone who felt uncomfortable because of his behaviour but hit out at "false rumours and misinformation". BrewDog's new chief executive James Taylor said last year the business is "well past" its previous controversies.

For some in the UK, talk of 996 culture could feel rather familiar. Jobs at large corporate law firms here pay high salaries, but many demand long hours in return. According to a survey carried out last year by the website Legal Cheek, it is not uncommon for average working days to be 12 hours or more. Investment banking - the wheeling and dealing side of the financial industry, which looks after mergers, acquisitions and stock market launches - is also notorious for long hours, a culture highlighted in HBO's TV drama Industry, written by two former investment bankers. Industry sources suggest 65 to 70 hours a week are relatively common, and can extend up to 100 hours when a major deal is being finalised.

'Working smarter'?

Is it legal here? UK law, under the working time regulations, states that most employees should not have to work more than 48 hours a week on average. But people can opt out and work longer if they choose. So provided the employee gives their consent, 996 is allowed. However, Ben Wilmott, head of public policy at the human resources professionals' association CIPD, thinks it is wrong to believe that long hours lead to better performance. "There doesn't seem to be any correlation at all between working long hours and productivity", he says. "There is quite good evidence which shows there's a risk of ill health if you work long hours… there's a higher risk of stroke and heart disease. So I think the focus should be on working smarter rather than longer… improving management capability, technology adoption, adoption of AI to improve productivity, rather than a focus on increasing working hours."

Some campaigners believe the UK could actually benefit from a reduction in working hours, and the adoption of a four-day week. They point to the results of a pilot project carried out in 2022, in which 61 organisations agreed to cut working hours for all staff for six months without any reduction in pay. It concluded that this significantly reduced stress and illness in the workforce, and helped companies retain staff, without losing productivity.

Recruitment expert Adrian Kinnersley believes the current enthusiasm for 996 remains largely confined to the technology sector, and for a good reason. "Whether you need to work 80 hours a week is debatable, but I think you would struggle in the current environment to compete with a relaxed 35-hour week culture," he says.

For Browser-Use founder Magnus Müller, meanwhile, the hours he and his peers work in Silicon Valley are really nothing remarkable. "I'm from a tiny village in south Germany", he says. "The farmers there, they get up at five every day and work more than 12 hours per day, seven days a week. And they don't take any holidays, or maybe just two to three days when they can get someone to take care of their cows. So I think there are so many industries where people have so much harder jobs, and struggle so much harder, and work so much harder. I would say it's more like kindergarten, what we are doing compared to them." |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Xbox_360] | [TOKENS: 8080] |
Contents Xbox 360 The Xbox 360 is a home video game console developed by Microsoft, being the successor to the original Xbox and the second console in the Xbox series. It was officially unveiled on MTV in a program titled MTV Presents Xbox: The Next Generation Revealed on May 12, 2005, with detailed launch and game information announced later that month at the 2005 Electronic Entertainment Expo (E3). As a seventh-generation console, it primarily competed with Sony's PlayStation 3 and Nintendo's Wii. The Xbox 360's online service, Xbox Live, was expanded from its previous iteration on the original Xbox and received regular updates during the console's lifetime. Available in free and subscription-based varieties, Xbox Live allows users to play games online; download games (through Xbox Live Arcade) and game demos; purchase and stream music, television programs, and films through the Xbox Music and Xbox Video portals; and access third-party content services through media streaming applications. In addition to online multimedia features, it allows users to stream media from local PCs. Several peripherals have been released, including wireless controllers, expanded hard drive storage, and the Kinect motion sensing camera. The release of these additional services and peripherals helped the Xbox brand grow from gaming-only to encompassing all multimedia, turning it into a hub for living-room computing entertainment. Launched worldwide mostly between November 2005 and December 2006, the Xbox 360 was initially in short supply in many regions, including North America and Europe. The earliest versions of the console suffered from a high failure rate, indicated by the so-called "Red Ring of Death", necessitating an extension of the device's warranty period. Microsoft released two redesigned models of the console: the Xbox 360 S in 2010, and the Xbox 360 E in 2013. The Xbox 360 is the ninth-highest-selling home video game console in history, and the highest-selling console made by an American company and by Microsoft. Although not the best-selling console of its generation, the Xbox 360 was deemed by TechRadar to be the most influential through its emphasis on digital media distribution and multiplayer gaming on Xbox Live. The Xbox 360's successor, the Xbox One, was released on November 22, 2013. On April 20, 2016, Microsoft announced that it would end the production of new Xbox 360 hardware, although the company will continue to support the platform. On August 17, 2023, Microsoft announced that on July 29, 2024, the Xbox 360 game marketplace would stop offering new purchases and the Microsoft Movies & TV app will no longer function, though the console will still be able to download previously purchased content and enter multiplayer sessions. History Known during development as Xbox Next, Xenon, Xbox 2, Xbox FS or NextBox, the Xbox 360 was conceived in early 2003. In February 2003, planning for the Xenon software platform began, and was headed by Microsoft's Vice President J Allard. That month, Microsoft held an event for 400 developers in Bellevue, Washington to recruit support for the system. Also that month, Peter Moore, former president of Sega of America, joined Microsoft. On August 12, 2003, ATI signed on to produce the graphics processing unit for the new console, a deal that was publicly announced two days later. Before the launch of the Xbox 360, several Alpha development kits were spotted using Apple's Power Mac G5 hardware. 
This was because the Power Mac G5's PowerPC 970 processor used the same PowerPC architecture as IBM's Xenon processor, on which the Xbox 360 would eventually run. The cores of the Xenon processor were developed using a slightly modified version of the PlayStation 3's Cell Processor PPE architecture. According to David Shippy and Mickie Phipps, the IBM employees were "hiding" their work from Sony and Toshiba, IBM's partners in developing the Cell Processor. Jeff Minter created the music visualization program Neon, which is included with the Xbox 360. The Xbox 360 was released on November 22, 2005, in the United States and Canada; December 2, 2005, in Europe and December 10, 2005, in Japan. It was later launched in Mexico, Brazil, Chile, Colombia, Hong Kong, Singapore, South Korea, Taiwan, Australia, New Zealand, South Africa, India, and Russia. In its first year on the market, the system was launched in 36 countries, more than any other console in a single year. In 2009, IGN named the Xbox 360 the sixth-greatest video game console of all time, out of a field of 25. Although not the best-selling console of the seventh generation, the Xbox 360 was deemed by TechRadar to be the most influential, by emphasizing digital media distribution and online gaming through Xbox Live, and by popularizing game achievement awards. PC Magazine considered the Xbox 360 the prototype for online gaming as it "proved that online gaming communities could thrive in the console space". Five years after the Xbox 360's debut, the well-received Kinect motion capture camera was released; it set a record as the fastest-selling consumer electronics device in history and extended the life of the console. Edge ranked Xbox 360 the second-best console of the 1993–2013 period, stating "It had its own social network, cross-game chat, new indie games every week, and the best version of just about every multiformat game ... Killzone is no Halo and nowadays Gran Turismo is no Forza, but it's not about the exclusives—there's nothing to trump Naughty Dog's PS3 output, after all. Rather, it's about the choices Microsoft made back in the original Xbox's lifetime. The PC-like architecture meant the early EA Sports games ran at 60fps compared to only 30 on PS3, Xbox Live meant every dedicated player had an existing friends list, and Halo meant Microsoft had the killer next-generation exclusive. And when developers demo games on PC now they do it with a 360 pad—another industry benchmark, and a critical one." The Xbox 360 began production only 69 days before launch, on September 14, 2005, and Microsoft was not able to supply enough systems to meet initial consumer demand in Europe or North America, selling out completely upon release in all regions except in Japan. Forty thousand units were offered for sale on auction site eBay during the initial week of release, 10% of the total supply. By year's end, Microsoft had shipped 1.5 million units, including 900,000 in North America, 500,000 in Europe, and 100,000 in Japan. In May 2008, Microsoft announced that 10 million Xbox 360s had been sold and that it was the "first current generation gaming console" to surpass the 10 million figure in the US. In the US, the Xbox 360 was the leader in current-generation home console sales until June 2008, when it was surpassed by the Wii. By the end of March 2011, Xbox 360 sales in the US had reached 25.4 million units. 
Between January 2011 and October 2013, the Xbox 360 was the best-selling console in the United States for 32 consecutive months. By the end of 2014, Xbox 360 sales had surpassed sales of the Wii, making the Xbox 360 the best-selling 7th-generation console in the US once again. In Canada, the Xbox 360 has sold a total of 870,000 units as of August 1, 2008. According to data from Circana, lifetime Xbox 360 sales in the United States reached 42.7 million units. In Europe, the Xbox 360 has sold seven million units as of November 20, 2008. The Xbox 360 took 110 weeks to reach 2 million units sold in the UK, generating £507m in revenue. Sales in the United Kingdom would reach 3.2 million units by January 2009, per GfK Chart-Track. The 8 million unit mark was crossed in the UK by February 2013. Sales of the Xbox 360 would overtake the Wii later that year, topping 9 million units, making the Xbox 360 the best-selling 7th-generation console in the UK, as well as the third best-selling console of all time in the region, behind the PS2 and Nintendo DS. Over 1 million units were sold in Spain across the console's lifecycle. The Xbox 360 crossed 1 million units sold in Japan in March 2009, and 1.5 million in June 2011. Lifetime sales of the Xbox 360 in Japan stand at 1,616,218 units. While the Xbox 360 has sold poorly in Japan, it improved upon the sales of the original Xbox, which had total sales of 474,992 units. Furthermore, the Xbox 360 managed to outsell both the PlayStation 3 and Wii in the weeks ending September 14, 2008, and February 22, 2009, when the Japanese Xbox 360 exclusives Infinite Undiscovery and Star Ocean: The Last Hope were released, respectively. Ultimately, Edge magazine would report that Microsoft had been unable to make serious inroads into the dominance of domestic rivals Sony and Nintendo, adding that lackluster sales in Japan had led to retailers scaling down and, in some cases, discontinuing sales of the Xbox 360 completely. The significance of Japan's poor sales might be overstated in the media in comparison to overall international sales. The Xbox 360 sold much better than its predecessor, and although not the best-selling console of the seventh generation, it is regarded as a success since it strengthened Microsoft as a major force in the console market at the expense of well-established rivals. The inexpensive Wii did sell the most console units but eventually saw a collapse of third-party software support in its later years, and it has been viewed by some as a fad since the succeeding Wii U had a poor debut in 2012. The PlayStation 3 struggled for a time due to being too expensive and initially lacking quality games, making it far less dominant than its predecessor, the PlayStation 2, and it took until late in the PlayStation 3's lifespan for its sales and games to reach parity with the Xbox 360. TechRadar proclaimed that "Xbox 360 passes the baton as the king of the hill – a position that puts all the more pressure on its successor, Xbox One". The Xbox 360's advantage over its competitors was due to the release of high-profile games from both first-party and third-party developers. The 2007 Game Critics Awards honored the platform with 38 nominations and 12 wins – more than any other platform. By March 2008, the Xbox 360 had reached a software attach rate of 7.5 games per console in the US; the rate was 7.0 in Europe, while its competitors' rates were 3.8 (PS3) and 3.5 (Wii), according to Microsoft. 
At the 2008 Game Developers Conference, Microsoft announced that it expected over 1,000 games to be available for the Xbox 360 by the end of the year. As well as enjoying exclusives such as additions to the Halo franchise and Gears of War, the Xbox 360 managed to secure simultaneous releases of games that were initially planned to be PS3 exclusives, including Devil May Cry 4, Ace Combat 6, Virtua Fighter 5, Grand Theft Auto IV, Final Fantasy XIII, Tekken 6, Metal Gear Rising: Revengeance, and L.A. Noire. In addition, Xbox 360 versions of cross-platform games were generally considered superior to their PS3 counterparts in 2006 and 2007, due in part to the difficulties of programming for the PS3. TechRadar deemed the Xbox 360 the most influential game system through its emphasis on digital media distribution, the Xbox Live online gaming service, and its game achievement feature. During the console's lifetime, the Xbox brand has grown from gaming-only to encompassing all multimedia, turning it into a hub for a "living-room computing environment". Microsoft announced the successor to the Xbox 360, the Xbox One, on May 21, 2013. On April 20, 2016, Microsoft announced the end of production of new Xbox 360 hardware, though the company would continue to provide hardware and software support for the platform; selected Xbox 360 games are playable on Xbox One. The Xbox 360 continued to be supported by major publishers with new games well into the Xbox One's lifecycle. New titles were still being released in 2018. The Xbox 360 continues to have an active player base years after the system's discontinuation. Speaking to Engadget at E3 2019 after the announcement of Project Scarlett, the next generation of Xbox consoles after the Xbox One, Phil Spencer stated that there were still "millions and millions of players" active on the Xbox 360. After the launch of the Xbox Series X and S by the end of 2020, the Xbox 360 still had a 17.7% market share of all consoles in use in Mexico; comparatively, newer systems like the Xbox One and PlayStation 4 stood at 36.9% and 18.0% market share, respectively. Hardware The main unit of the Xbox 360 itself has a slight double concavity in matte white or black. The official color of the white model is Arctic Chill. It features a port on the top when vertical (left side when horizontal) to which a custom-housed hard disk drive unit can be attached. On the Slim and E models, the hard drive bay is on the bottom when vertical (right side when horizontal) and requires the opening of a concealed door to access it. (This does not void the warranty.) The Xbox 360 Slim/E hard drives are standard 2.5" SATA laptop drives, but have a custom enclosure and firmware so that the Xbox 360 can recognize them. The Xbox 360 uses the triple-core, IBM-designed Xenon as its CPU; each core is capable of simultaneously processing two threads, so the CPU can operate on up to six threads at once. Graphics processing is handled by the ATI Xenos, which has 10 MB of eDRAM. Its main memory pool is 512 MB in size. The Xbox 360 was originally designed with only 256 MB of RAM, but Epic, the Gears of War developer, demonstrated to Microsoft that the console should have 512 MB of RAM to deliver much better performance. 
When asked about this, Epic Games Executive Vice President Mark Rein said in 2006: "So the day they made the decision, we were apparently the first developer they called; we were at Game Developers Conference, was it two years ago, and then I got a call from the chief financial officer of MGS and he said 'I just want you to know you cost me a billion dollars' and I said, 'we did a favour for a billion gamers'." Various hard disk drives have been produced, including options at 20, 60, 120, 250, 320, or 500 GB. Many accessories are available for the console, including both wired and wireless controllers, faceplates for customization, headsets for chatting, a webcam for video chatting, dance mats and Gamercize for exercise, three sizes of memory units and six sizes of hard drives (20, 60, 120, 250 [initially Japan only, but later also available elsewhere], 320, and 500 GB), among other items, all of which are styled to match the console. In 2006, Microsoft released the Xbox 360 HD DVD Player. The accessory was discontinued in 2008 after the format war had ended in Blu-ray's favor. Kinect is a "controller-free gaming and entertainment experience" for the Xbox 360. It was first announced on June 1, 2009, at the Electronic Entertainment Expo, under the codename Project Natal. The add-on peripheral enables users to control and interact with the Xbox 360 without a game controller by using gestures, spoken commands and presented objects and images. The Kinect accessory is compatible with all Xbox 360 models, connecting to new models via a custom connector, and to older ones via a USB and mains power adapter. During their CES 2010 keynote speech, Robbie Bach and Microsoft CEO Steve Ballmer went on to say that Kinect would be released during the holiday period (November–January) and work with every Xbox 360 console. It was released on November 4, 2010. At launch, the Xbox 360 was available in two configurations: the "Xbox 360" package (unofficially known as the 20 GB Pro or Premium), priced at US$399 or £279.99, and the "Xbox 360 Core", priced at US$299 and £209.99. The original shipment of the Xbox 360 version included a cut-down version of the Media Remote as a promotion. The Elite package was launched later at US$479. The "Xbox 360 Core" was replaced by the "Xbox 360 Arcade" in October 2007 and a 60 GB version of the Xbox 360 Pro was released on August 1, 2008. The Pro package was discontinued and marked down to US$249 on August 28, 2009, to be sold until stock ran out, while the Elite was also marked down in price to US$299. Two major hardware revisions of the Xbox 360 have succeeded the original models: the Xbox 360 S (also referred to as the "Slim") replaced the original "Elite" and "Arcade" models in 2010. The S model carries a smaller, streamlined appearance with an angular case, and utilizes a redesigned motherboard designed to alleviate the hardware and overheating issues experienced by prior models. It also includes a proprietary port for use with the Kinect sensor. The Xbox 360 E, a further streamlined variation of the 360 S with a two-tone rectangular case inspired by Xbox One, was released in 2013. In addition to its revised aesthetics, the Xbox 360 E also has one fewer USB port and no longer supports S/PDIF. 
The original model of the Xbox 360 has been subject to a number of technical problems. Since the console's release in 2005, users have reported concerns over its reliability and failure rate. To aid customers with defective consoles, Microsoft extended the Xbox 360's manufacturer's warranty to three years for hardware failure problems that generate a "General Hardware Failure" error report. A "General Hardware Failure" is recognized on all models released before the Xbox 360 S by three quadrants of the ring around the power button flashing red. This error is often known as the "Red Ring of Death". In April 2009, the warranty was extended to also cover failures related to the E74 error code. The warranty extension is not granted for any other types of failures that do not generate these specific error codes. After these problems surfaced, Microsoft attempted to modify the console to improve its reliability. Modifications included a reduction in the number, size, and placement of components, the addition of dabs of epoxy on the corners and edges of the CPU and GPU as glue to prevent movement relative to the board during heat expansion, and a second GPU heatsink to dissipate more heat. With the release of the redesigned Xbox 360 S, the warranty for the newer models does not include the three-year extended coverage for "General Hardware Failures". The newer Xbox 360 S and E models indicate system overheating when the console's power button begins to flash red, unlike previous models where the first and third quadrant of the ring would light up red around the power button if overheating occurred. The system will then warn the user of imminent system shutdown until the system has cooled, whereas a flashing power button that alternates between green and red is an indication of a "General Hardware Failure" unlike older models where three of the quadrants would light up red. Software The Xbox 360 launched with 14 games in North America and 13 in Europe. The console's best-selling game for 2005, Call of Duty 2, sold over a million copies. Five other games sold over a million copies in the console's first year on the market: Ghost Recon Advanced Warfighter, The Elder Scrolls IV: Oblivion, Dead or Alive 4, Saints Row, and Gears of War. Gears of War would become the best-selling game on the console with 3 million copies in 2006, before being surpassed in 2007 by Halo 3 with over 8 million copies. Six games were initially available in Japan, while eagerly anticipated games such as Dead or Alive 4 and Enchanted Arms were released in the weeks following the console's launch. Games targeted specifically for the region, such as Chromehounds, Ninety-Nine Nights, and Phantasy Star Universe, were also released in the console's first year. Microsoft also had the support of Japanese developer Mistwalker, founded by Final Fantasy creator Hironobu Sakaguchi. Mistwalker's first game, Blue Dragon, was released in 2006 and had a limited-edition bundle which sold out quickly with over 10,000 pre-orders. Blue Dragon is one of three Xbox 360 games to surpass 200,000 units in Japan, along with Tales of Vesperia and Star Ocean: The Last Hope. Mistwalker's second game, Lost Odyssey also sold over 100,000 copies. The 2007 Game Critics Awards honored the Xbox 360 platform with 38 Nominations and 11 Wins.
By 2015, game releases started to decline as most publishers instead focused on the Xbox One. The last official game released for the system was Just Dance 2019, released on October 23, 2018, in North America, and October 25 in Europe and Australia. As one of the late updates to the software following its discontinuation, Microsoft will add the ability for Xbox 360 users to use cloud saves even if they do not have Xbox Live Gold prior to the launch of the Xbox Series X and Series S in November 2020. The new consoles will have backward compatibility for all Xbox 360 games that are already backward compatible on the Xbox One and can use any Xbox 360 game's cloud saves through this update, making the transition to the new consoles easier. The Xbox 360's original graphical user interface was the Xbox 360 Dashboard; a tabbed interface that featured five "Blades" (formerly four blades), and was designed by AKQA and Audiobrain. It could be launched automatically when the console booted without a disc in it, or when the disc tray was ejected, but the user had the option to select what the console does if a game is in the tray on start up, or if inserted when already on. A simplified version of it was also accessible at any time via the Xbox Guide button on the gamepad. This simplified version showed the user's gamercard, Xbox Live messages and friends list. It also allowed for personal and music settings, in addition to voice or video chats, or returning to the Xbox Dashboard from the game. On November 19, 2008, the Xbox 360's dashboard was changed from the "Blade" interface to a dashboard reminiscent of that present on the Zune and Windows Media Center, known as the "New Xbox Experience" or NXE. Since the console's release, Microsoft has released several updates for the Dashboard software. These updates have included adding new features to the console, enhancing Xbox Live functionality and multimedia playback capabilities, adding compatibility for new accessories, and fixing bugs in the software. Such updates are mandatory for users wishing to use Xbox Live, as access to Xbox Live is disabled until the update is performed.[citation needed] At E3 2008, at Microsoft's Show, Microsoft's Aaron Greenberg and Marc Whitten announced the new Xbox 360 interface called the "New Xbox Experience" (NXE). The update was intended to ease console menu navigation. Its GUI uses the Twist UI, previously used in Windows Media Center and the Zune. Its new Xbox Guide retains all Dashboard functionality (including the Marketplace browser and disk ejection) and the original "Blade" interface (although the color scheme has been changed to match that of the NXE Dashboard). The NXE also provides many new features. Users can now install games from disc to the hard drive to play them with reduced load time and less disc drive noise, but each game's disc must remain in the system in order to run. A new, built-in Community system allows the creation of digitized Avatars that can be used for multiple activities, such as sharing photos or playing Arcade games like 1 vs. 100. The update was released on November 19, 2008. While previous system updates have been stored on internal memory, the NXE update was the first to require a storage device—at least a 128 MB memory card or a hard drive. Microsoft released a further update to the Xbox 360 Dashboard starting on December 6, 2011. 
It included a completely new user interface which utilizes Microsoft's Metro design language and added new features such as cloud storage for game saves and profiles, live television, Bing voice search, access to YouTube videos and better support for Kinect voice commands. The Xbox 360 supports videos in Windows Media Video (WMV) format (including high-definition and PlaysForSure videos), as well as H.264 and MPEG-4 media. The December 2007 dashboard update added support for the playback of MPEG-4 ASP format videos. The console can also display pictures and perform slideshows of photo collections with various transition effects, and supports audio playback, with music player controls accessible through the Xbox 360 Guide button. Users may play back their own music while playing games or using the dashboard and can play music with an interactive visual synthesizer. Music, photos and videos can be played from standard USB mass storage devices, Xbox 360 proprietary storage devices (such as memory cards or Xbox 360 hard drives), and servers or computers with Windows Media Center or Windows XP with Service pack 2 or higher within the local-area network in streaming mode. As the Xbox 360 uses a modified version of the UPnP AV protocol,[unreliable source?] some alternative UPnP servers such as uShare (part of the GeeXboX project) and MythTV can also stream media to the Xbox 360, allowing for similar functionality from non-Windows servers. This is possible with video files up to HD-resolution and with several codecs (MPEG-2, MPEG-4, WMV) and container formats (WMV, MOV, TS). As of October 27, 2009, UK and Ireland users are also able to access live and on-demand streams of Sky television programming. At the 2007, 2008, and 2009 Consumer Electronics Shows, Microsoft had announced that IPTV services would soon be made available to use through the Xbox 360. In 2007, Microsoft chairman Bill Gates stated that IPTV on Xbox 360 was expected to be available to consumers by the holiday season, using the Microsoft TV IPTV Edition platform. In 2008, Gates and president of Entertainment & Devices Robbie Bach announced a partnership with BT in the United Kingdom, in which the BT Vision advanced TV service, using the newer Microsoft Mediaroom IPTV platform, would be accessible via Xbox 360, planned for the middle of the year. BT Vision's DVR-based features would not be available on Xbox 360 due to limited hard drive capacity. In 2010, while announcing version 2.0 of Microsoft Mediaroom, Microsoft CEO Steve Ballmer mentioned that AT&T's U-verse IPTV service would enable Xbox 360s to be used as set-top boxes later in the year. As of January 2010, IPTV on Xbox 360 has yet to be deployed beyond limited trials.[citation needed] In 2012, Microsoft released the Live Event Player, allowing for events such as video game shows, beauty pageants, award shows, concerts, news and sporting events to be streamed on the console via Xbox Live. The first live events streamed on Live were the 2012 Revolver Golden Gods, Microsoft's E3 2012 media briefing and the Miss Teen USA 2012 beauty pageant.[citation needed] XNA Community is a feature whereby Xbox 360 owners can receive community-created games, made with Microsoft XNA Game Studio, from the XNA Creators Club. The games are written, published, and distributed through a community managed portal. XNA Community provides a channel for digital videogame delivery over Xbox Live that can be free of royalties, publishers and licenses. 
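The UPnP streaming support described above (the MPEG-2, MPEG-4 and WMV codecs in WMV, MOV or TS containers, up to HD resolution) can be summarised as a small lookup table. The sketch below is purely illustrative; the names, the can_stream() helper and the 1080-line ceiling are assumptions, not part of any real Xbox or UPnP API.

```python
# Illustrative only: encodes the codec/container support quoted above for
# streaming from third-party UPnP servers to the Xbox 360.
SUPPORTED_CODECS = {"MPEG-2", "MPEG-4", "WMV"}
SUPPORTED_CONTAINERS = {"WMV", "MOV", "TS"}
MAX_HEIGHT = 1080  # "up to HD-resolution" (assumed to mean 1080 lines)

def can_stream(codec: str, container: str, height: int) -> bool:
    """Return True if a file matches the constraints described in the passage."""
    return (codec in SUPPORTED_CODECS
            and container in SUPPORTED_CONTAINERS
            and height <= MAX_HEIGHT)

print(can_stream("MPEG-2", "TS", 720))    # True
print(can_stream("H.264", "MKV", 1080))   # False
```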
XNA game sales, however, did not meet original expectations, though Xbox Live Indie Games (XBLIG) has had some "hits".[citation needed] Services When the Xbox 360 was released, Microsoft's online gaming service Xbox Live was shut down for 24 hours and underwent a major upgrade, adding a basic non-subscription service called Xbox Live Silver (later renamed Xbox Live Free) to its already established premium subscription-based service (which was renamed Gold). Xbox Live Free is included with all SKUs of the console. It allows users to create a user profile, join on message boards, and access Microsoft's Xbox Live Arcade and Marketplace and talk to other members. A Live Free account does not generally support multiplayer gaming; however, some games that have rather limited online functions already (such as Viva Piñata) and games that feature their own subscription service (e.g. EA Sports games) can be played with a Free account. Xbox Live also supports voice, a feature possible with the Xbox Live Vision. Xbox Live Gold includes the same features as Free and includes integrated online game playing capabilities outside of third-party subscriptions. Microsoft has allowed previous Xbox Live subscribers to maintain their profile information, friends list, and games history when they make the transition to Xbox Live Gold. To transfer an Xbox Live account to the new system, users need to link a Windows Live ID to their gamertag on Xbox.com. When users add an Xbox Live enabled profile to their console, they are required to provide the console with their passport account information and the last four digits of their credit card number, which is used for verification purposes and billing. An Xbox Live Gold account has an annual cost of US$59.99, C$59.99, NZ$90.00, £39.99, or €59.99. On January 5, 2011, Xbox Live reached over 30 million subscribers. The Xbox Live Marketplace was a virtual market designed for the console that allows Xbox Live users to download purchased or promotional content. The service offers movie and game trailers, game demos, Xbox Live Arcade games and Xbox 360 Dashboard themes as well as add-on game content (items, costumes, levels etc.). These features are available to both Free and Gold members on Xbox Live. A hard drive or memory unit is required to store products purchased from Xbox Live Marketplace. In order to download priced content, users are required to purchase Microsoft Points for use as scrip; though some products (such as trailers and demos) are free to download. Microsoft Points can be obtained through prepaid cards in 1,600 and 4,000-point denominations. Microsoft Points can also be purchased through Xbox Live with a credit card in 500, 1,000, 2,000 and 5,000-point denominations. Users are able to view items available to download on the service through a PC via the Xbox Live Marketplace website. An estimated 70 percent of Xbox Live users have downloaded items from the Marketplace. The Xbox 360 Marketplace was discontinued on July 29, 2024. Xbox Live Arcade is an online service operated by Microsoft that is used to distribute downloadable video games to Xbox and Xbox 360 owners. In addition to classic arcade games such as Ms. Pac-Man, the service offers some new original games like Assault Heroes. The Xbox Live Arcade also features games from other consoles, such as the PlayStation game Castlevania: Symphony of the Night and PC games such as Zuma. The service was first launched on November 3, 2004, using a DVD to load, and offered games for about US$5 to $15. 
Items are purchased using Microsoft Points, a proprietary currency used to reduce credit card transaction charges. On November 22, 2005, Xbox Live Arcade was re-launched with the release of the Xbox 360, at which point it was integrated with the Xbox 360's dashboard. The games are generally aimed toward more casual gamers; examples of the more popular games are Geometry Wars, Street Fighter II' Hyper Fighting, and Uno. On March 24, 2010, Microsoft introduced the Game Room to Xbox Live. Game Room is a gaming service for Xbox 360 and Microsoft Windows that lets players compete in classic arcade and console games in a virtual arcade. On November 6, 2006, Microsoft announced the Xbox Video Marketplace, an exclusive video store accessible through the console. Launched in the United States on November 22, 2006, the first anniversary of the Xbox 360's launch, the service allows users in the United States to download high-definition and standard-definition television shows and movies onto an Xbox 360 console for viewing. With the exception of short clips, content is not currently[when?] available for streaming, and must be downloaded. Movies are also available for rental. They expire in 14 days after download or at the end of the first 24 hours after the movie has begun playing, whichever comes first. Television episodes can be purchased to own, and are transferable to an unlimited number of consoles. Downloaded files use 5.1 surround audio and are encoded using VC-1 for video at 720p, with a bitrate of 6.8 Mbit/s. Television content is offered by MTV, VH1, Comedy Central, Turner Broadcasting, and CBS; movie content is provided by Warner Bros., Paramount, and Disney, along with other publishers. The Spring 2007 update added support for further video codecs. As a late addition to the December Xbox 360 update, 25 movies were added to the European Xbox 360 video marketplace on December 11, 2007, costing 250 Microsoft Points for the SD version of a movie and 380 points for the HD version. Xbox Live members in Canada also gained access to the Xbox Live Video Marketplace on December 11, 2007, with around 30 movies available to download for the same number of Microsoft Points. On May 26, 2009, Microsoft announced it would release the Zune HD (in the fall of 2009), which was then the next addition to the Zune product range. This affected the Xbox Live Video Store, as it was also announced that the Zune Video Marketplace and the Xbox Live Video Store would be merged to form the Zune Marketplace, which would initially arrive on Xbox Live in seven countries: the United Kingdom, the United States, France, Italy, Germany, Ireland and Spain. Further details were released at the Microsoft press conference at E3 2009. On October 16, 2012, Xbox Video and Xbox Music were released, replacing the Zune Marketplace. Xbox Video is a digital video service that offers full HD movies and TV series for purchase or rental on Xbox 360, Windows 8, Windows RT PCs and tablets, and Windows Phones. On August 18, 2015, Microsoft rolled out an update renaming it Movies and TV, similar to the Windows 10 app. Xbox Music provides 30 million music tracks available for purchase or access through subscription. It was announced at the Electronic Entertainment Expo 2012 and it integrates with Windows 8 and Windows Phone as well. In August 2015, Microsoft rolled out an update renaming it Groove Music, similar to the Windows 10 app.
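For a rough sense of the point prices quoted above (250 points for an SD movie, 380 for HD), they can be converted at the commonly cited United States rate of 80 Microsoft Points per dollar. That rate is an assumption here, not taken from the passage, and regional pricing differed.

```python
# Rough conversion of Microsoft Points prices to US dollars. The 80-points-per-
# dollar figure is the commonly cited US rate and is an assumption; actual
# rates varied by region and currency.
POINTS_PER_USD = 80

def points_to_usd(points: int) -> float:
    return points / POINTS_PER_USD

for label, points in [("SD movie", 250), ("HD movie", 380), ("prepaid card", 1600)]:
    print(f"{label}: {points} points ≈ ${points_to_usd(points):.2f}")
# SD movie: ≈ $3.12, HD movie: ≈ $4.75, prepaid card: ≈ $20.00
```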
Xbox SmartGlass allows for integration between the Xbox 360 console and mobile devices such as tablets and smartphones. An app is available on Android, Windows Phone 8 and iOS. Users of the feature can view additional content to accompany the game they are playing, or the TV shows and movies they are watching. They can also use their mobile device as a remote to control the Xbox 360. The SmartGlass functionality can also be found in the Xbox 360's successor, the Xbox One. Game development PartnerNet, the developers-only alternative Xbox Live network used to beta test game content developed for Xbox Live Arcade, runs on Xbox 360 debug kits, which are used both by developers and by the gaming press. In a podcast released on February 12, 2007, a developer breached the PartnerNet non-disclosure agreement (NDA) by commenting that he had found a playable version of Alien Hominid and an unplayable version of Ikaruga on PartnerNet. A few video game journalists, misconstruing the breach of the NDA as an invalidation of the NDA, immediately began reporting on other games being tested via PartnerNet, including a remake of Jetpac. (Alien Hominid for the Xbox 360 was released on February 28 of that year, and Ikaruga was released over a year later on April 9, 2008. Jetpac was released for the Xbox 360 on March 28, 2007, as Jetpac Refuelled). There have also been numerous video and screenshot leaks of game footage on PartnerNet, as well as a complete version of Sonic the Hedgehog 4: Episode I, which caused the whole PartnerNet service to be shut down overnight on April 3, 2010. In the following days, Microsoft reminded developers and journalists that they were in breach of NDA by sharing information about PartnerNet content and asked websites to remove lists of games in development that were discovered on the service. Sega used feedback from fans about the leaked version of Sonic the Hedgehog 4: Episode I to refine it before they eventually released it. Additionally, a pair of hackers played their modded Halo 3 games on PartnerNet in addition to using PartnerNet to find unreleased and untested software. The hackers passed this information along to their friends before they were eventually caught by Bungie. Consequently, Bungie left a message for the hackers on PartnerNet which read "Winners Don't Break Into PartnerNet". Other games that were leaked in the PartnerNet fiasco include Shenmue and Shenmue II. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mass] | [TOKENS: 8981] |
Contents Mass Mass is an intrinsic property of a body. In modern physics, it is generally defined as the strength of an object's gravitational attraction to other bodies - as measured by an observer moving along at the same speed. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses. Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body's inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The SI base unit of mass is the kilogram (kg). In physics, mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than balance scale comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) determines the strength of this force. In the Standard Model of physics, the mass of elementary particles is believed to be a result of their coupling with the Higgs boson in what is known as the Brout–Englert–Higgs mechanism. Phenomena There are several distinct phenomena that can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other, current experiments have found no difference in results regardless of how it is measured: The mass of an object determines its acceleration in the presence of an applied force. The inertia and the inertial mass describe this property of physical bodies at the qualitative and quantitative level respectively. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates and is affected by a gravitational field. If a first body of mass mA is placed at a distance r (center of mass to center of mass) from a second body of mass mB, each body is subject to an attractive force Fg = GmAmB/r2, where G = 6.67×10−11 N⋅kg−2⋅m2 is the "universal gravitational constant". This is sometimes referred to as gravitational mass.[note 1] Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been incorporated a priori in the equivalence principle of general relativity. Units of mass The International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), and was first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. However, because precise measurement of a cubic decimetre of water at the specified temperature and pressure was difficult, in 1889 the kilogram was redefined as the mass of a metal object, and thus became independent of the metre and the properties of water, this being a copper prototype of the grave in 1793, the platinum Kilogramme des Archives in 1799, and the platinum–iridium International Prototype of the Kilogram (IPK) in 1889. However, the mass of the IPK and its national copies have been found to drift over time. 
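A short numerical sketch of the two roles of mass described above: the same quantity that sets the gravitational pull (Fg = G·mA·mB/r²) also sets the inertial response (a = F/m). Only G is taken from the passage; the Earth and Moon figures below are rounded textbook values supplied purely for illustration.

```python
# Minimal illustration of gravitational vs. inertial mass.
G = 6.67e-11          # m^3 kg^-1 s^-2, as quoted in the passage
m_earth = 5.97e24     # kg (assumed illustrative value)
m_moon = 7.35e22      # kg (assumed illustrative value)
r = 3.84e8            # m, mean Earth-Moon distance (assumed illustrative value)

F = G * m_earth * m_moon / r**2      # mutual gravitational force
a_moon = F / m_moon                  # acceleration of the Moon toward Earth
a_earth = F / m_earth                # acceleration of Earth toward the Moon

print(f"F       = {F:.3e} N")        # ~2.0e20 N
print(f"a_moon  = {a_moon:.3e} m/s^2")   # ~2.7e-3 m/s^2
print(f"a_earth = {a_earth:.3e} m/s^2")  # ~3.3e-5 m/s^2
```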
The re-definition of the kilogram and several other units came into effect on 20 May 2019, following a final vote by the CGPM in November 2018. The new definition uses only invariant quantities of nature: the speed of light, the caesium hyperfine frequency, the Planck constant and the elementary charge. Non-SI units accepted for use with SI units include: Outside the SI system, other units of mass include: The grain was the earliest unit of mass and is the smallest unit in the apothecary, avoirdupois, Tower, and troy systems. The early unit was a grain of wheat or barleycorn used to weigh the precious metals silver and gold. Larger units preserved in stone standards were developed that were used as both units of mass and of monetary currency. The pound was derived from the mina (unit) used by ancient civilizations. A smaller unit was the shekel, and a larger unit was the talent. The magnitude of these units varied from place to place. The Babylonians and Sumerians had a system in which there were 60 shekels in a mina and 60 minas in a talent. The Roman talent consisted of 100 libra (pound) which were smaller in magnitude than the mina. The troy pound (~373.2 g) used in England and the United States for monetary purposes, like the Roman pound, was divided into 12 ounces, but the Roman uncia (ounce) was smaller. The carat is a unit for measuring gemstones that had its origin in the carob seed, which later was standardized at 1/144 ounce and then 0.2 gram. Goods of commerce were originally traded by number or volume. When weighing of goods began, units of mass based on a volume of grain or water were developed. The diverse magnitudes of units having the same name, which still appear today in our dry and liquid measures, could have arisen from the various commodities traded. The larger avoirdupois pound for goods of commerce might have been based on volume of water which has a higher bulk density than grain. The stone, quarter, hundredweight, and ton were larger units of mass used in Britain. Today only the stone continues in customary use for measuring personal body weight. The present stone is 14 pounds (~6.35 kg), but an earlier unit appears to have been 16 pounds (~7.25 kg). The other units were multiples of 2, 8, and 160 times the stone, or 28, 112, and 2240 pounds (~12.7 kg, 50.8 kg, 1016 kg), respectively. The hundredweight was approximately equal to two talents. The "long ton" is equal to 2240 pounds (1016.047 kg), the "short ton" is equal to 2000 pounds (907.18474 kg), and the tonne (or metric ton) (t) is equal to 1000 kg (or 1 megagram). Definitions In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass. Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined: In everyday usage, mass and "weight" are often used interchangeably. For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. 
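The British unit multiples listed above can be checked against their metric equivalents with the exact avoirdupois pound (0.45359237 kg), a conversion factor supplied here as background rather than taken from the passage.

```python
# Check of the stone / quarter / hundredweight / ton figures quoted above.
LB_TO_KG = 0.45359237  # exact definition of the avoirdupois pound (background fact)

units_in_pounds = {
    "stone": 14,
    "quarter": 28,
    "hundredweight": 112,
    "long ton": 2240,
    "short ton": 2000,
}

for name, pounds in units_in_pounds.items():
    print(f"{name:13s} = {pounds:5d} lb = {pounds * LB_TO_KG:10.3f} kg")
# stone ≈ 6.350 kg, hundredweight ≈ 50.802 kg, long ton ≈ 1016.047 kg,
# short ton ≈ 907.185 kg, matching the figures in the text.
```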
But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its current course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass. The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into a free fall on the moon. Restated in mathematical terms, on the surface of the Earth, the weight W of an object is related to its mass m by W = mg, where g = 9.80665 m/s2 is the acceleration due to Earth's gravitational field, (expressed as the acceleration experienced by a free-falling object). For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the generalized equation for weight W of an object is related to its mass m by the equation W = –ma, where a is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero). Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. In classical mechanics, Newton's third law implies that active and passive gravitational mass must always be identical (or at least proportional), but the classical theory offers no compelling reason why the gravitational mass has to equal the inertial mass. That it does is merely an empirical fact. Albert Einstein developed his general theory of relativity starting with the assumption that the inertial and passive gravitational masses are the same. This is known as the equivalence principle. 
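The weight figures quoted above follow directly from W = mg. In the sketch below, the lunar g is not stated in the passage; 1.63 m/s² is inferred from the 81.5 N figure (81.5 N / 50 kg) and is close to the usual value for the Moon's surface gravity.

```python
# Verifying the weight figures quoted above with W = m * g.
m = 50.0               # kg
g_earth = 9.80665      # m/s^2, standard gravity as given in the passage
g_moon = 1.63          # m/s^2 (assumed, inferred from the 81.5 N figure)

print(f"Weight on Earth: {m * g_earth:.1f} N")   # ~490.3 N (the passage quotes 491 N)
print(f"Weight on Moon:  {m * g_moon:.1f} N")    # 81.5 N
```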
The particular equivalence often referred to as the "Galilean equivalence principle" or the "weak equivalence principle" has the most important consequence for freely falling objects. Suppose an object has inertial and gravitational masses m and M, respectively. If the only force acting on the object comes from a gravitational field g, the force on the object is F = Mg. Given this force, the acceleration of the object can be determined by Newton's second law: F = ma. Putting these together, the gravitational acceleration is given by a = (M/m)g. This says that the ratio of gravitational to inertial mass of any object is equal to some constant K if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". In addition, the constant K can be taken as 1 by defining our units appropriately. The first experiments demonstrating the universality of free-fall were, according to scientific 'folklore', conducted by Galileo by dropping objects from the Leaning Tower of Pisa. This is most likely apocryphal: he is more likely to have performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have been performed, such as those performed by Loránd Eötvös, using the torsion balance pendulum, in 1889. As of 2008, no deviation from universality, and thus from Galilean equivalence, has ever been found, at least to a precision of 10−6. More precise experimental efforts are still being carried out. The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in free-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity. On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of both objects towards each other, and of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15. A stronger version of the equivalence principle, known as the Einstein equivalence principle or the strong equivalence principle, lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of spacetime, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field. In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics.
To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction but a theory of the latter has not been yet reconciled with the currently popular model of particle physics, known as the Standard Model. Pre-Newtonian concepts The concept of amount is very old and predates recorded history. The concept of "weight" would incorporate "amount" and acquire a double meaning that was not clearly recognized as such. What we now know as mass was until the time of Newton called “weight.” ... A goldsmith believed that an ounce of gold was a quantity of gold. ... But the ancients believed that a beam balance also measured “heaviness” which they recognized through their muscular senses. ... Mass and its associated downward force were believed to be the same thing. Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection: where W is the weight of the collection of similar objects and n is the number of objects in the collection. Proportionality, by definition, implies that two values have a constant ratio: An early use of this relationship is a balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare masses. Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard. If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different sized collections of the same common mass standard, the carob seed. The ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was: In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published his three laws of planetary motion, explaining how the planets orbit the Sun. In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System.[note 5] On 25 August 1609, Galileo Galilei demonstrated his first telescope to a group of Venetian merchants, and in early January 1610, Galileo observed four dim objects near Jupiter, which he mistook for stars. However, after a few days of observation, Galileo realized that these "stars" were in fact orbiting Jupiter. 
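The proportionality statements referenced earlier in this passage (weight versus number of similar objects, and the Roman ounce-to-pound ratio) are not rendered in this extract; a plausible reconstruction in standard notation, not necessarily the source's exact form, is:

```latex
W = k\,n, \qquad
\frac{W_1}{n_1} = \frac{W_2}{n_2} = k \ \text{(a constant)}, \qquad
\frac{\text{Roman ounce}}{\text{Roman pound}} = \frac{144}{1728} = \frac{1}{12}.
```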
These four objects (later named the Galilean moons in honor of their discoverer) were the first celestial bodies observed to orbit something other than the Earth or Sun. Galileo continued to observe these moons over the next eighteen months, and by the middle of 1611, he had obtained remarkably accurate estimates for their periods. Sometime prior to 1638, Galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. Galileo was not the first to investigate Earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. However, Galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. It is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by Galileo, but the results obtained from these experiments were both realistic and compelling. A biography by Galileo's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass.[note 6] In support of this conclusion, Galileo had advanced the following theoretical argument: He asked if two bodies of different masses and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing resolution to this question is that all bodies must fall at the same rate. A later experiment was described in Galileo's Two New Sciences published in 1638. One of Galileo's fictional characters, Salviati, describes an experiment using a bronze ball and a wooden ramp. The wooden ramp was "12 cubits long, half a cubit wide and three finger-breadths thick" with a straight, smooth, polished groove. The groove was lined with "parchment, also smooth and polished as possible". And into this groove was placed "a hard, smooth and very round bronze ball". The ramp was inclined at various angles to slow the acceleration enough so that the elapsed time could be measured. The ball was allowed to roll a known distance down the ramp, and the time taken for the ball to move the known distance was measured. The time was measured using a water clock described as follows: Galileo found that for an object in free fall, the distance that the object has fallen is always proportional to the square of the elapsed time: Galileo had shown that objects in free fall under the influence of the Earth's gravitational field have a constant acceleration, and Galileo's contemporary, Johannes Kepler, had shown that the planets follow elliptical paths under the influence of the Sun's gravitational mass. However, Galileo's free fall motions and Kepler's planetary motions remained distinct during Galileo's lifetime. According to K. M. Browne: "Kepler formed a [distinct] concept of mass ('amount of matter' (copia materiae)), but called it 'weight' as did everyone at that time." Finally, in 1686, Newton gave this distinct concept its own name. In the first paragraph of Principia, Newton defined quantity of matter as “density and bulk conjunctly”, and mass as quantity of matter. The quantity of matter is the measure of the same, arising from its density and bulk conjunctly. ... It is this quantity that I mean hereafter everywhere under the name of body or mass. 
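Galileo's free-fall result quoted above, that the distance fallen is proportional to the square of the elapsed time, is easy to check numerically for uniform acceleration. The value of g is the standard figure quoted earlier in the article; the sample times are arbitrary.

```python
# For uniform acceleration, d = (1/2) g t^2, so d / t^2 is the same for all t.
g = 9.80665  # m/s^2

for t in [1.0, 2.0, 3.0, 4.0]:
    d = 0.5 * g * t**2
    print(f"t = {t:.0f} s  d = {d:7.2f} m  d/t^2 = {d / t**2:.4f}")
# d/t^2 is constant (g/2 ≈ 4.9033) for every row, i.e. d ∝ t^2.
```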
And the same is known by the weight of each body; for it is proportional to the weight. — Isaac Newton, Mathematical principles of natural philosophy, Definition I. Newtonian mass Robert Hooke had published his concept of gravitational forces in 1674, stating that all celestial bodies have an attraction or gravitating power towards their own centers, and also attract all the other celestial bodies that are within the sphere of their activity. He further stated that gravitational attraction increases by how much nearer the body wrought upon is to its own center. In correspondence with Isaac Newton from 1679 and 1680, Hooke conjectured that gravitational forces might decrease according to the double of the distance between the two bodies. Hooke urged Newton, who was a pioneer in the development of calculus, to work through the mathematical details of Keplerian orbits to determine if Hooke's hypothesis was correct. Newton's own investigations verified that Hooke was correct, but due to personal differences between the two men, Newton chose not to reveal this to Hooke. Isaac Newton kept quiet about his discoveries until 1684, at which time he told a friend, Edmond Halley, that he had solved the problem of gravitational orbits, but had misplaced the solution in his office. After being encouraged by Halley, Newton decided to develop his ideas about gravity and publish all of his findings. In November 1684, Isaac Newton sent a document to Edmund Halley, now lost but presumed to have been titled De motu corporum in gyrum (Latin for "On the motion of bodies in an orbit"). Halley presented Newton's findings to the Royal Society of London, with a promise that a fuller presentation would follow. Newton later recorded his ideas in a three-book set, entitled Philosophiæ Naturalis Principia Mathematica (English: Mathematical Principles of Natural Philosophy). The first was received by the Royal Society on 28 April 1685–86; the second on 2 March 1686–87; and the third on 6 April 1686–87. The Royal Society published Newton's entire collection at their own expense in May 1686–87.: 31 Isaac Newton had bridged the gap between Kepler's gravitational mass and Galileo's gravitational acceleration, resulting in the discovery of the following relationship which governed both of these: where g is the apparent acceleration of a body as it passes through a region of space where gravitational fields exist, μ is the gravitational mass (standard gravitational parameter) of the body causing gravitational fields, and R is the radial coordinate (the distance between the centers of the two bodies). By finding the exact relationship between a body's gravitational mass and its gravitational field, Newton provided a second method for measuring gravitational mass. The mass of the Earth can be determined using Kepler's method (from the orbit of Earth's Moon), or it can be determined by measuring the gravitational acceleration on the Earth's surface, and multiplying that by the square of the Earth's radius. The mass of the Earth is approximately three-millionths of the mass of the Sun. To date, no other accurate method for measuring gravitational mass has been discovered. Newton's cannonball was a thought experiment used to bridge the gap between Galileo's gravitational acceleration and Kepler's elliptical orbits. It appeared in Newton's 1728 book A Treatise of the System of the World. According to Galileo's concept of gravitation, a dropped stone falls with constant acceleration down towards the Earth. 
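The relation referenced above is, in standard form, g = μ/R² with μ = GM (a reconstruction; this extract does not preserve the source's rendering of the formula). Rearranged as M = gR²/G it gives Newton's second method for measuring a gravitational mass. The sketch below applies it to the Earth; the mean radius is supplied as an assumed value.

```python
# Earth's mass from its surface gravity, M = g * R**2 / G.
G = 6.674e-11        # m^3 kg^-1 s^-2
g = 9.80665          # m/s^2, surface gravitational acceleration
R = 6.371e6          # m, mean radius of the Earth (assumed value)

M_earth = g * R**2 / G
print(f"Estimated mass of the Earth: {M_earth:.3e} kg")   # ≈ 5.96e24 kg
```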
However, Newton explains that when a stone is thrown horizontally (meaning sideways or perpendicular to Earth's gravity) it follows a curved path. "For a stone projected is by the pressure of its own weight forced out of the rectilinear path, which by the projection alone it should have pursued, and made to describe a curve line in the air; and through that crooked way is at last brought down to the ground. And the greater the velocity is with which it is projected, the farther it goes before it falls to the Earth.": 513 Newton further reasons that if an object were "projected in an horizontal direction from the top of a high mountain" with sufficient velocity, "it would reach at last quite beyond the circumference of the Earth, and return to the mountain from which it was projected." In contrast to earlier theories (e.g. celestial spheres) which stated that the heavens were made of entirely different material, Newton's theory of mass was groundbreaking partly because it introduced universal gravitational mass: every object has gravitational mass, and therefore, every object generates a gravitational field. Newton further assumed that the strength of each object's gravitational field would decrease according to the square of the distance to that object. If a large collection of small objects were formed into a giant spherical body such as the Earth or Sun, Newton calculated the collection would create a gravitational field proportional to the total mass of the body,: 397 and inversely proportional to the square of the distance to the body's center.: 221 [note 7] For example, according to Newton's theory of universal gravitation, each carob seed produces a gravitational field. Therefore, if one were to gather an immense number of carob seeds and form them into an enormous sphere, then the gravitational field of the sphere would be proportional to the number of carob seeds in the sphere. Hence, it should be theoretically possible to determine the exact number of carob seeds that would be required to produce a gravitational field similar to that of the Earth or Sun. In fact, by unit conversion it is a simple matter of abstraction to realize that any traditional mass unit can theoretically be used to measure gravitational mass. Measuring gravitational mass in terms of traditional mass units is simple in principle, but extremely difficult in practice. According to Newton's theory, all objects produce gravitational fields and it is theoretically possible to collect an immense number of small objects and form them into an enormous gravitating sphere. However, from a practical standpoint, the gravitational fields of small objects are extremely weak and difficult to measure. Newton's books on universal gravitation were published in the 1680s, but the first successful measurement of the Earth's mass in terms of traditional mass units, the Cavendish experiment, did not occur until 1797, over a hundred years later. Henry Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. As of 2025, the Earth's mass is only known to around five significant figures, whereas GM🜨, the product of Earth's mass and the universal gravitational constant, is known to over nine significant figures. Given two objects A and B, of masses MA and MB, separated by a displacement RAB, Newton's law of gravitation states that each object exerts a gravitational force on the other, of magnitude where G is the universal gravitational constant. 
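The force magnitude elided at the end of the passage is, in standard form, F = G·MA·MB/|RAB|². Separately, the sketch below turns two figures from this section into numbers: Cavendish's density result converted into a mass for the Earth (the mean radius is an assumed value), and the carob-seed thought experiment using the 0.2 g carat figure given earlier in the article.

```python
# Earth's mass from Cavendish's density figure, and the carob-seed count.
import math

R = 6.371e6                    # m, mean Earth radius (assumed)
rho_water = 1000.0             # kg/m^3
rho_earth = 5.448 * rho_water  # Cavendish's figure, as quoted above

volume = (4.0 / 3.0) * math.pi * R**3
M_earth = rho_earth * volume
print(f"Earth mass from Cavendish's density: {M_earth:.2e} kg")   # ≈ 5.9e24 kg

seed_mass = 0.2e-3             # kg, one carob seed (carat) per the earlier section
print(f"Equivalent number of carob seeds: {M_earth / seed_mass:.2e}")  # ≈ 3e28
```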
The above statement may be reformulated in the following way: if g is the magnitude of the gravitational field at a given location, then the gravitational force on an object with gravitational mass M is F = Mg. This is the basis by which masses are determined by weighing. In simple spring scales, for example, the force F is proportional to the displacement of the spring beneath the weighing pan, as per Hooke's law, and the scales are calibrated to take g into account, allowing the mass M to be read off. Assuming the gravitational field is equivalent on both sides of the balance, a balance measures relative weight, giving the relative gravitational mass of each object. Mass was traditionally believed to be a measure of the quantity of matter in a physical body, equal to the "amount of matter" in an object. For example, Barré de Saint-Venant argued in 1851 that every object contains a number of "points" (basically, interchangeable elementary particles), and that mass is proportional to the number of points the object contains. (In practice, this "amount of matter" definition is adequate for most of classical mechanics, and sometimes remains in use in basic education, if the priority is to teach the difference between mass and weight.) This traditional "amount of matter" belief was contradicted by the fact that different atoms (and, later, different elementary particles) can have different masses, and was further contradicted by Einstein's theory of relativity (1905), which showed that the measurable mass of an object increases when energy is added to it (for example, by increasing its temperature or forcing it near an object that electrically repels it.) This motivates a search for a different definition of mass that is more accurate than the traditional definition of "the amount of matter in an object". Inertial mass is the mass of an object measured by its resistance to acceleration. This definition has been championed by Ernst Mach and has since been developed into the notion of operationalism by Percy W. Bridgman. The simple classical mechanics definition of mass differs slightly from the definition in the theory of special relativity, but the essential meaning is the same. In classical mechanics, according to Newton's second law, we say that a body has a mass m if, at any instant of time, it obeys the equation of motion F = ma, where F is the resultant force acting on the body and a is the acceleration of the body's centre of mass.[note 8] For the moment, we will put aside the question of what "force acting on the body" actually means. This equation illustrates how mass relates to the inertia of a body. Consider two objects with different masses. If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force. However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, it will experience an equal and opposite force. To be precise, suppose we have two objects of constant inertial masses m1 and m2.
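A sketch of the spring-scale reading described above: Hooke's law converts the spring's displacement into a force, and dividing by g turns that force into the mass shown on the dial. The spring constant and displacement below are made-up values.

```python
# Spring-scale mass readout via Hooke's law (illustrative values only).
k = 2000.0      # N/m, spring constant (hypothetical)
x = 0.012       # m, measured compression of the spring (hypothetical)
g = 9.80665     # m/s^2

F = k * x       # Hooke's law: restoring force balancing the weight
M = F / g       # mass indicated by the calibrated scale
print(f"F = {F:.2f} N, indicated mass = {M:.3f} kg")   # 24.00 N, ~2.447 kg
```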
We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m1 by m2, which we denote F12, and the force exerted on m2 by m1, which we denote F21. Newton's second law states that F12 = m1a1 and F21 = m2a2, where a1 and a2 are the accelerations of m1 and m2, respectively. Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another. Newton's third law then states that F12 = −F21, and thus m1a1 = −m2a2. If |a1| is non-zero, the fraction |a2| / |a1| is well-defined, which allows us to measure the inertial mass of m1 as m1 = (|a2| / |a1|) m2. In this case, m2 is our "reference" object, and we can define its mass m as (say) 1 kilogram. Then we can measure the mass of any other object in the universe by colliding it with the reference object and measuring the accelerations. Additionally, mass relates a body's momentum p to its linear velocity v: p = mv, and the body's kinetic energy K to its velocity: K = ½mv². The primary difficulty with Mach's definition of mass is that it fails to take into account the potential energy (or binding energy) needed to bring two masses sufficiently close to one another to perform the measurement of mass. This is most vividly demonstrated by comparing the mass of the proton in the nucleus of deuterium, to the mass of the proton in free space (which is greater by about 0.239%; this is due to the binding energy of deuterium). Thus, for example, if the reference weight m2 is taken to be the mass of the neutron in free space, and the relative accelerations for the proton and neutron in deuterium are computed, then the above formula over-estimates the mass m1 (by 0.239%) for the proton in deuterium. At best, Mach's formula can only be used to obtain ratios of masses, that is, as m1 / m2 = |a2| / |a1|. An additional difficulty was pointed out by Henri Poincaré, which is that the measurement of instantaneous acceleration is impossible: unlike the measurement of time or distance, there is no way to measure acceleration with a single measurement; one must make multiple measurements (of position, time, etc.) and perform a computation to obtain the acceleration. Poincaré termed this to be an "insurmountable flaw" in the Mach definition of mass. Atomic masses Typically, the mass of objects is measured in terms of the kilogram, which since 2019 is defined in terms of fundamental constants of nature. The mass of an atom or other particle can be compared more precisely and more conveniently to that of another atom, and thus scientists developed the dalton (also known as the unified atomic mass unit). By definition, 1 Da (one dalton) is exactly one-twelfth of the mass of a carbon-12 atom, and thus, a carbon-12 atom has a mass of exactly 12 Da. In relativity In some frameworks of special relativity, physicists have used different definitions of the term. In these frameworks, two kinds of mass are defined: rest mass (invariant mass),[note 9] and relativistic mass (which increases with velocity). Rest mass is the Newtonian mass as measured by an observer moving along with the object. Relativistic mass is the total quantity of energy in a body or system divided by c2. The two are related by the following equation: relativistic mass = γ × (rest mass), where γ is the Lorentz factor: γ = 1/√(1 − v²/c²). The invariant mass of systems is the same for observers in all inertial frames, while the relativistic mass depends on the observer's frame of reference.
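The relativistic-mass relation stated above is easy to tabulate; the sketch below evaluates γ and γ × (rest mass) at a few speeds. The 1 kg rest mass and the sample speeds are arbitrary illustrative choices.

```python
# Relativistic vs. rest mass: m_rel = gamma * m_rest, gamma = 1/sqrt(1 - v^2/c^2).
import math

c = 299_792_458.0   # m/s

def lorentz_factor(v: float) -> float:
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m_rest = 1.0  # kg (arbitrary)
for fraction in (0.1, 0.5, 0.9, 0.99):
    v = fraction * c
    gamma = lorentz_factor(v)
    print(f"v = {fraction:4.2f}c  gamma = {gamma:6.3f}  m_rel = {gamma * m_rest:6.3f} kg")
```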
In order to formulate the equations of physics such that mass values do not change between observers, it is convenient to use rest mass. The rest mass of a body is also related to its energy E and the magnitude of its momentum p by the relativistic energy-momentum equation: So long as the system is closed with respect to mass and energy, both kinds of mass are conserved in any given frame of reference. The conservation of mass holds even as some types of particles are converted to others. Matter particles (such as atoms) may be converted to non-matter particles (such as photons of light), but this does not affect the total amount of mass or energy. Although things like heat may not be matter, all types of energy still continue to exhibit mass.[note 10] Thus, mass and energy do not change into one another in relativity; rather, both are names for the same thing, and neither mass nor energy appear without the other. Both rest and relativistic mass can be expressed as an energy by applying the well-known relationship E = mc2, yielding rest energy and "relativistic energy" (total system energy) respectively: The "relativistic" mass and energy concepts are related to their "rest" counterparts, but they do not have the same value as their rest counterparts in systems where there is a net momentum. Because the relativistic mass is proportional to the energy, it has gradually fallen into disuse among physicists. There is disagreement over whether the concept remains useful pedagogically. In bound systems, the binding energy must often be subtracted from the mass of the unbound system, because binding energy commonly leaves the system at the time it is bound. The mass of the system changes in this process merely because the system was not closed during the binding process, so the energy escaped. For example, the binding energy of atomic nuclei is often lost in the form of gamma rays when the nuclei are formed, leaving nuclides which have less mass than the free particles (nucleons) of which they are composed. Mass–energy equivalence also holds in macroscopic systems. For example, if one takes exactly one kilogram of ice, and applies heat, the mass of the resulting melt-water will be more than a kilogram: it will include the mass from the thermal energy (latent heat) used to melt the ice; this follows from the conservation of energy. This number is small but not negligible: about 3.7 nanograms. It is given by the latent heat of melting ice (334 kJ/kg) divided by the speed of light squared (c2 ≈ 9×1016 m2/s2). In general relativity, the equivalence principle is the equivalence of gravitational and inertial mass. At the core of this assertion is Albert Einstein's idea that the gravitational force as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (i.e. accelerated) frame of reference. However, it turns out that it is impossible to find an objective general definition for the concept of invariant mass in general relativity. At the core of the problem is the non-linearity of the Einstein field equations, making it impossible to write the gravitational field energy as part of the stress–energy tensor in a way that is invariant for all observers. For a given observer, this can be achieved by the stress–energy–momentum pseudotensor. 
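Two quick checks on this passage: the standard form of the energy–momentum relation it refers to, E² = (pc)² + (mc²)², reduces to the rest energy at zero momentum, and the melt-water example can be reproduced from the quoted latent heat of 334 kJ/kg.

```python
# Energy-momentum relation and the melt-water mass increase quoted above.
import math

c = 299_792_458.0        # m/s

def total_energy(m: float, p: float) -> float:
    """Energy of a body with rest mass m and momentum magnitude p."""
    return math.sqrt((p * c) ** 2 + (m * c ** 2) ** 2)

# With zero momentum this reduces to the rest energy E = m*c^2.
print(f"Rest energy of 1 kg: {total_energy(1.0, 0.0):.3e} J")   # ~8.99e16 J

latent_heat = 334e3      # J/kg, latent heat of melting ice, as quoted above
delta_m = latent_heat / c**2
print(f"Mass gained by 1 kg of melt-water: {delta_m * 1e12:.2f} ng")  # ≈ 3.72 ng
```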
In quantum physics In classical mechanics, the inert mass of a particle appears in the Euler–Lagrange equation as a parameter m: d/dt(∂L/∂ẋ_i) = m·(d²x_i/dt²). After quantization, replacing the position vector x with a wave function, the parameter m appears in the kinetic energy operator: iħ ∂ψ/∂t = (−(ħ²/2m)∇² + V(r))ψ(r, t). In the ostensibly covariant (relativistically invariant) Dirac equation, and in natural units, this becomes: (−iγ^μ ∂_μ + m)ψ = 0, where the "mass" parameter m is now simply a constant associated with the quantum described by the wave function ψ. In the Standard Model of particle physics as developed in the 1960s, this term arises from the coupling of the field ψ to an additional field Φ, the Higgs field. In the case of fermions, the Higgs mechanism results in the replacement of the term mψ in the Lagrangian with G_ψ ψ̄ϕψ. This shifts the explanandum of the value for the mass of each elementary particle to the value of the unknown coupling constant G_ψ. A tachyonic field, or simply tachyon, is a quantum field with an imaginary mass. Although tachyons (particles that move faster than light) are a purely hypothetical concept not generally believed to exist, fields with imaginary mass have come to play an important role in modern physics and are discussed in popular books on physics. Under no circumstances do any excitations ever propagate faster than light in such theories; the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). While the field may have imaginary mass, any physical particles do not; the "imaginary mass" shows that the system becomes unstable, and sheds the instability by undergoing a type of phase transition called tachyon condensation (closely related to second-order phase transitions) that results in symmetry breaking in current models of particle physics. The term "tachyon" was coined by Gerald Feinberg in a 1967 paper, but it was soon realized that Feinberg's model in fact did not allow for superluminal speeds. Instead, the imaginary mass creates an instability in the configuration: any configuration in which one or more field excitations are tachyonic will spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation. Well-known examples include the condensation of the Higgs boson in particle physics, and ferromagnetism in condensed matter physics. Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality. Therefore, information still does not propagate faster than light, and solutions grow exponentially, but not superluminally (there is no violation of causality). Tachyon condensation drives a physical system that has reached a local limit, and that might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared.
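The role of a negative (tachyonic) mass-squared term can be illustrated with the standard quartic toy potential; the sketch below (arbitrary parameter values, not drawn from any specific model) locates the stable minimum and shows that the curvature there, i.e. the effective mass-squared, is positive:

```python
import math

# Toy potential V(phi) = (1/2)*mu2*phi^2 + (1/4)*lam*phi^4 with mu2 < 0 (tachyonic).
mu2, lam = -1.0, 0.5       # arbitrary illustrative values

phi_min = math.sqrt(-mu2 / lam)                # field value at the stable minimum
mass2_at_zero = mu2                            # V''(0): negative, i.e. unstable
mass2_at_min = mu2 + 3 * lam * phi_min**2      # V''(phi_min) = -2*mu2 > 0

print(f"minimum at phi = ±{phi_min:.3f}")
print(f"effective mass^2 at phi = 0:     {mass2_at_zero:+.3f} (tachyonic)")
print(f"effective mass^2 at the minimum: {mass2_at_min:+.3f} (positive)")
```

Small oscillations around φ = 0 would be tachyonic, but around the true minimum they have positive mass-squared, mirroring the condensation described above.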
This is a special case of the general rule, where unstable massive particles are formally described as having a complex mass, with the real part being their mass in the usual sense, and the imaginary part being the decay rate in natural units. However, in quantum field theory, a particle (a "one-particle state") is roughly defined as a state which is constant over time; i.e., an eigenstate of the Hamiltonian. An unstable particle is a state which is only approximately constant over time. If it exists long enough to be measured, it can be formally described as having a complex mass, with the real part of the mass greater than its imaginary part. If both parts are of the same magnitude, this is interpreted as a resonance appearing in a scattering process rather than a particle, as it is considered not to exist long enough to be measured independently of the scattering process. In the case of a tachyon, the real part of the mass is zero, and hence no concept of a particle can be attributed to it. In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons. In particular the energy–momentum relation E² = p²c² + m²c⁴ (where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle: E = mc²/√(1 − v²/c²). This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When v is larger than c, the denominator in the equation for the energy is "imaginary", as the value under the radical is negative. Because the total energy must be real, the numerator must also be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number. See also Notes References External links
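That final point can be checked numerically: with an imaginary rest mass, the imaginary denominator for v > c yields a real total energy. A small sketch with arbitrary values (and c set to 1 for convenience):

```python
import cmath

# E = m*c^2 / sqrt(1 - v^2/c^2) evaluated for a superluminal speed and an
# imaginary rest mass; both values below are arbitrary illustrations.
c = 1.0
v = 1.5 * c
m = 2.0j                                      # imaginary "rest mass"

denominator = cmath.sqrt(1 - (v / c) ** 2)    # purely imaginary for v > c
energy = m * c**2 / denominator

print(f"denominator  = {denominator}")        # ≈ 1.118j
print(f"total energy = {energy}")             # real, ≈ 1.789
```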
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Raspberry_Pi] | [TOKENS: 6008] |
Contents Raspberry Pi Raspberry Pi (/paɪ/ PY) is a series of small single-board computers (SBCs) originally developed in the United Kingdom by the Raspberry Pi Foundation in collaboration with Broadcom. To commercialize the product and support its growing demand, the Foundation established a commercial entity, now known as Raspberry Pi Holdings. The Raspberry Pi was originally created to help teach computer science in schools, but gained popularity for many other uses due to its low cost, compact size, and flexibility. It is now used in areas such as industrial automation, robotics, home automation, IoT devices, and hobbyist projects. The company's products range from simple microcontrollers to computers that the company markets as being powerful enough to be used as a general purpose PC. Computers are built around a custom designed system on a chip and offer features such as HDMI video/audio output, USB ports, wireless networking, GPIO pins, and up to 16 GB of RAM. Storage is typically provided via microSD cards. In 2015, the Raspberry Pi surpassed the ZX Spectrum as the best-selling British computer of all time. As of March 2025[update], 68 million units had been sold. History The Raspberry Pi Foundation was established in 2008 by a group including Eben Upton, in response to a noticeable decline in both the number and skill level of students applying to study computer science at the University of Cambridge Computer Laboratory. The foundation's goal was to create a low-cost computer to help rekindle interest in programming among schoolchildren. This mission was inspired by the aims of the BBC Micro computer of the early 1980s, which was developed by Acorn Computers as part of a BBC initiative to promote computer literacy in UK schools. The names "Model A" and "Model B" were chosen as a deliberate homage to the BBC Micro. The name "Raspberry Pi" combines the fruit-themed naming convention used by early computer companies with a nod to the Python programming language. The first prototypes resembled small USB sticks. By August 2011, fifty functionally complete "alpha" boards were produced for testing, with demonstrations showing them running a Debian-based desktop and handling 1080p video playback. In late 2011, twenty-five "beta" boards were finalized, and to generate publicity before the official launch, ten of these were auctioned on eBay in early 2012. The first commercial Raspberry Pi, the Model B, was launched on 29 February 2012, with an initial price of $35. Demand far exceeded expectations, causing the websites of the two initial licensed distributors, Premier Farnell and RS Components, to crash from high traffic. Initial batches sold out almost immediately, with one distributor reporting over 100,000 pre-orders on the first day. The lower-cost $25 Model A followed on 4 February 2013. The Raspberry Pi did not ship with a pre-installed operating system. While ports of RISC OS 5 and Fedora Linux were available, a port of Debian called Raspbian quickly became the standard. Released in July 2012, it was optimized to leverage the Raspberry Pi's floating-point unit, offering significant performance gains. Raspberry Pi quickly endorsed it as the official recommended OS, and by September 2013, the company assumed leadership of Raspbian's development. In 2012, the Foundation restructured, creating Raspberry Pi (Trading) Ltd. to handle engineering and commercial activities, with Eben Upton as its CEO. 
This allowed the Raspberry Pi Foundation to focus solely on its charitable and educational mission. Raspberry Pi (Trading) Ltd. was renamed Raspberry Pi Ltd. in 2021. In June 2024, the company went public on the London Stock Exchange under the ticker symbol RPI, becoming Raspberry Pi Holdings. Following the launch, the first units reached buyers in April 2012. To address overwhelming demand and initial supply chain issues, the Foundation ramped up production to 4,000 units per day by July. The first batch of 10,000 boards was produced in factories located in Taiwan and China. A significant strategic shift occurred in September 2012, when manufacturing began moving to a Sony factory in Pencoed, Wales. During this period, the hardware was also refined: the Model B Revision 2.0 board was announced with minor corrections, and in October, its included RAM was doubled to 512 MB. The post-launch period focused heavily on software and ecosystem development. In August 2012, the Foundation enabled hardware-accelerated H.264 video encoding and began selling licenses for MPEG-2 and VC-1 codecs. A major milestone for the open-source community occurred in October 2012, when the Foundation released the Videocore IV graphics driver as free software. While the claim of it being the first fully open-source ARM SoC driver was debated, the move was widely praised. This effort culminated in February 2014 with the release of full documentation for the graphics core and a complete source release of the graphics stack under a 3-clause BSD license. In 2014, the Raspberry Pi product line began to diversify. April saw the release of the Compute Module, a miniature Raspberry Pi in a small form factor designed for industrial and embedded applications, which would soon become the largest market for the computers. In July the Model B+ was released with a refined design featuring additional USB ports and a more efficient board layout that established the form factor for future models. A smaller, cheaper ($20) Model A+ was released in November. A significant leap in performance came in February 2015 with the Raspberry Pi 2, which featured a 900 MHz quad-core CPU and 1 GB of RAM. Following its release, the price of the Model B+ was lowered to $25, a move some observers linked to the emergence of lower-priced competitors. The Raspberry Pi Zero, launched in November 2015, radically redefined the entry point for computing at a price of just $5. In February 2016, the Raspberry Pi 3 marked another major milestone by integrating a 64-bit processor, Wi-Fi, and Bluetooth. The product line continued to expand with the wireless-enabled Raspberry Pi Zero W (February 2017), the faster Raspberry Pi 3B+ (March 2018), Raspberry Pi 3A+ (November 2018), and Compute Module 3+ (January 2019). The Raspberry Pi 4, launched in June 2019, represented another major performance leap with a faster processor, up to 8 GB of RAM, dual-monitor support, and USB 3.0 ports. A compute module version (CM4) launched in October 2020. This era saw further diversification with the Raspberry Pi 400 (a computer integrated into a keyboard) in November 2020, and the Raspberry Pi Pico in January 2021. The Pico, based on the in-house designed RP2040 chip, marked the company's first entry into the low-cost microcontroller market. The Raspberry Pi Zero 2 W, introduced in 2021, featured a faster processor, providing a significant performance boost while maintaining the low-cost, compact form factor. 
The global chip shortage starting in 2020, as well as an uptick in demand starting in early 2021, notably affected the Raspberry Pi, causing significant availability issues from that time onward. The company explained its approach to the shortages in 2021 and again in April 2022, stating that it was prioritising business and industrial customers. The Raspberry Pi 5 was released in October 2023, featuring an upgraded CPU and GPU, up to 16 GB of RAM, a PCIe interface for fast peripherals, and an in-house designed southbridge chip. Updated versions of the Compute Module (CM5) and keyboard computer (Pi 500, Pi 500+) based on the Pi 5's architecture were subsequently announced. The Raspberry Pi Pico 2, released in 2024, introduced the RP2350 microcontroller, featuring selectable dual-core 32-bit ARM Cortex-M33 or RISC-V processors, 520 KB of RAM, and 4 MB of flash memory. The Raspberry Pi's sales demonstrated remarkable growth. The one-millionth Pi was sold by October 2013, a figure that doubled just a month later. By February 2016, sales reached eight million units, surpassing the ZX Spectrum as the best-selling British computer of all time. Sales hit ten million in September 2016, thirty million by December 2019, and forty million by May 2021. As of its tenth anniversary in February 2022, a total of 46 million Raspberry Pis had been sold. As of March 2025[update], 68 million units had been sold. Series and generations There are five main series of Raspberry Pi computers, each with multiple generations. Most models feature a Broadcom system on a chip (SoC) with an integrated ARM-based central processing unit (CPU) and an on-chip graphics processing unit (GPU). The exception is the Pico series, a microcontroller which uses the RP2040, a custom-designed SoC with an ARM-compatible CPU but no GPU. The flagship Raspberry Pi series, often referred to simply as "Raspberry Pi", offers high-performance hardware, a full Linux operating system, and a variety of common ports in a compact form factor roughly the size of a credit card. The Keyboard series combines Raspberry Pi hardware and ports into a keyboard computer form factor, providing a self-contained Linux-based desktop system. The Raspberry Pi Zero is a series of compact, low-cost, and low-power single-board computers that provide basic functionality and Linux compatibility for embedded and minimalist computing applications. The Pico is a series of compact microcontroller boards based on Raspberry Pi-designed chips. Unlike other models, they do not run Linux or support removable storage, and are instead programmed by flashing binaries to onboard flash memory. The Compute Module (CM) series delivers Raspberry Pi's flagship hardware in a compact form for industrial and embedded applications, omitting onboard ports and GPIO headers in favour of a carrier board interface. Compute Modules are offered in one of two formats: a board matching the physical dimensions of a DDR2 SO-DIMM RAM module (though electrically incompatible with standard SO-DIMM sockets) and a smaller board with dual 100-pin high-density connectors that enables additional interfaces. Hardware Since its introduction, Raspberry Pi hardware has been designed to provide low-cost computing platforms. The founders intended it to be an affordable and accessible system by making it compatible with widely available second-hand peripherals, such as televisions for displays, USB input devices, and cellphone chargers for power.
Over time, the hardware has expanded to support both advanced configurations and ultra-low-cost variants. The company has also committed to keeping products in production for up to ten years. The Raspberry Pi has undergone multiple hardware revisions, with changes in processor type, memory capacity, networking features, and peripheral support. All models include a processor, memory, and various input/output interfaces on a single circuit board. Most include an HDMI output, USB ports, and a GPIO (general-purpose input/output) header. Networking capabilities vary by model, with later versions featuring integrated Wi-Fi and Bluetooth. Storage is typically provided via a microSD card, with newer models supporting USB or PCIe-based boot options. Raspberry Pi models use a range of system on a chip (SoC) designs, developed in partnership with Arm and Broadcom. Each generation has introduced improvements in CPU architecture, clock speed, graphics, and overall performance. The original Raspberry Pi and the Pi Zero use the Broadcom BCM2835, featuring a single-core 32-bit ARM11 CPU and a VideoCore IV GPU. The CPU is clocked at 700 MHz on the original Pi and 1 GHz on the Zero and Zero W. The Raspberry Pi 2 introduced the BCM2836 with a 900 MHz quad-core 32-bit Cortex-A7 CPU, while later revisions used the 64-bit BCM2837 with Cortex-A53 cores. The Raspberry Pi 3 retained the BCM2837, increasing the CPU clock to 1.2–1.4 GHz depending on the model. The Pi Zero 2 uses the RP3A0, a system in a package (SiP) combining the quad-core Cortex-A53 processor clocked at 1 GHz with 512 MB of RAM. The Raspberry Pi 4 introduced the BCM2711, a 64-bit SoC with a quad-core Cortex-A72 CPU and VideoCore VI GPU. Clock speeds were initially 1.5 GHz and later increased to 1.8 GHz. The Raspberry Pi 5 uses the BCM2712, featuring a quad-core Cortex-A76 CPU at 2.4 GHz, an 800 MHz VideoCore VII GPU, and a separate RP1 southbridge chip designed in-house. Raspberry Pi has also developed its own chips outside of its partnership with Broadcom. The Raspberry Pi Pico uses the RP2040, featuring dual-core 32-bit Cortex-M0+ processors running at 133 MHz and 264 kB of on-chip RAM. The Pico 2 uses the RP2350, which can operate with either dual-core Cortex-M33 or dual-core Hazard3 RISC-V CPUs selected at boot, running at 150 MHz, with 520 kB of RAM. Most Raspberry Pi models support user-configurable overclocking through the system configuration file. More recent models feature dynamic frequency scaling, adjusting CPU speed based on workload to balance performance and thermal output. This behavior, while similar to overclocking, is part of the default power management system. If the CPU temperature exceeds 85 °C (185 °F) or if undervoltage is detected, performance is throttled automatically. For sustained high-performance workloads, additional cooling—such as a heat sink or fan—may be required. The original Raspberry Pi Model B was equipped with 512 MB of random-access memory (RAM), which, like later models, shares memory between the CPU and GPU. All Raspberry Pi boards support dynamic memory allocation between these components, allowing the system to adjust the division based on workload or user configuration. The original Model A included 256 MB of RAM. Subsequent models introduced increased memory capacities. The Pi 2B and 3 B/B+ models feature 1 GB of RAM, while the smaller 1A+ and 3A+ models have 512 MB. The Pi Zero and Zero 2 W also include 512 MB. 
The Pi 4 is available with 1, 2, 4, or 8 GB of RAM, and the Pi 5 expands this further with options for 2, 4, 8, or 16 GB, the highest capacity offered to date. Storage is typically provided via a microSD card, though some Compute Modules offer onboard eMMC flash. Newer models support USB booting, and the Pi 5 includes support for NVMe SSDs over PCIe. Boards also include USB ports for peripherals such as keyboards, mice, and storage devices. Raspberry Pi devices support both digital and analog video output across various resolutions. Early models featured a full-size HDMI port and an RCA connector for analog composite video output. Later boards removed the RCA jack but retained analog output via the 3.5 mm TRRS jack or dedicated solder points. According to the Raspberry Pi Foundation, analog support helps maintain accessibility in developing countries. To accommodate the addition of features on the compact boards, video connectors have shrunk across models. The Pi Zero series uses a mini-HDMI connector, while the Pi 4 and 5 use dual micro-HDMI ports. This change enables support for multiple displays: the Pi 4 can drive two 4K displays at 30 Hz or one at 60 Hz, while the Pi 5 improves on this with support for two 4K displays at 60 Hz. Older Raspberry Pi models support common display resolutions such as 720p and 1080p by default, with some capable of higher resolutions depending on hardware and configuration. In some cases, older hardware can output in 4K, though performance may be poor. Most Raspberry Pi models include a 40-pin connector known as the GPIO (general-purpose input/output) header, although only some of the pins are dedicated to GPIO functions. The header, designated as J8, uses a consistent pinout across models.[citation needed] The header supplies 3.3 V and 5 V power along with various multiplexed, low-speed interfaces, including UART, SPI, I²C, I²S, and PCM. GPIO pins can be configured as either inputs or outputs. When set as an output, a pin can drive a high (3.3 V) or low (0 V) signal. When configured as an input, it can read a high (3.3 V) or low (0 V) voltage level. The original Raspberry Pi 1 Model A and B include only the first 26 pins of this header. On some Pi Zero models, the header is unpopulated, but solderable through-holes are provided. The Pico models feature a unique layout with unpopulated through-holes and a castellated edge, allowing it to be surface-mounted as a module. Compute Module boards do not include GPIO headers but instead expose GPIO signals through their board connectors.[citation needed] Networking capabilities differ by model. The Model B and B+ include an Ethernet port. Starting with the Raspberry Pi 3, most models come with built-in WiFi and Bluetooth. The Raspberry Pi 3B+ adds faster Ethernet and dual-band WiFi. The Raspberry Pi 4 and 5 offer full gigabit Ethernet. The "A" models and the Pi Zero series do not have Ethernet ports, and built-in wireless support is optional. A USB adapter may be used for wired or wireless connections. Headless Raspberry Pi configurations may experience intermittent network connectivity issues, often attributed to default WiFi power management settings. These issues are typically addressable through configuration changes.[citation needed] Some Raspberry Pi models, like the Zero, 1A, 3A+, and 4, can act like a USB device (via the USB On-The-Go protocol) when plugged into another computer. This lets them work as gadgets such as a virtual keyboard, network adapter, or serial device. 
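As an illustration of the GPIO behaviour described above (driving a pin high at 3.3 V or low at 0 V), the sketch below uses the gpiozero library commonly bundled with Raspberry Pi OS; the BCM pin number and blink timing are arbitrary choices for illustration:

```python
from time import sleep
from gpiozero import LED   # gpiozero models a simple output device as an "LED"

led = LED(17)              # configure BCM GPIO 17 as an output

for _ in range(5):
    led.on()               # drive the pin high (3.3 V)
    sleep(1)
    led.off()              # drive the pin low (0 V)
    sleep(1)
```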
Many newer models can also start up (or "boot") directly from a USB drive, without needing a microSD card. This feature is not available on older models like the original Raspberry Pi, Pi Zero, or early versions of the Pi 2. Most Raspberry Pi models do not include a built-in real-time clock, which means they rely on an internet connection to set the correct time with the Network Time Protocol when they start up. If there is no connection, the time must be set manually; otherwise, the system assumes no time has passed since it was last used. Add-on clock modules are available for situations where accurate timekeeping is needed without internet access. The Raspberry Pi 5 is the first model to include a built-in clock which uses a battery to keep time when powered off. Specifications Reported figures include a draw of 1.6 A (8 W) under "power virus" workloads, with a 3 A (15 W) power supply recommended. Software The recommended operating system is Raspberry Pi OS, a Debian-based Linux distribution optimized for Raspberry Pi hardware and tuned to have low base memory requirements. It is available in both 32-bit and 64-bit versions and comes in several editions: a standard edition, a "Lite" version without a desktop environment, and a "Full" version that includes a comprehensive suite of software. Raspberry Pi OS can be purchased pre-installed on a microSD card, or downloaded and installed using Raspberry Pi Imager, a utility introduced in March 2020 to simplify the installation of operating systems onto SD cards and other media for Raspberry Pi devices. Available for macOS, Raspberry Pi OS, Ubuntu, and Windows, Imager allows users to download and write operating system disk images within a single application. In addition to Raspberry Pi OS, the utility supports a variety of third-party operating systems, including Alpine Linux, Armbian, Emteria.OS (Android based), FreedomBox, Kali Linux, LibreELEC, RetroPie, RISC OS, SatNOGS, and Ubuntu. The Raspberry Pi uses official firmware that is proprietary, meaning its source code is not publicly available, but the binary blob can be freely redistributed. An experimental open-source alternative to the official firmware is also available. Although limited in functionality, it demonstrates that it is possible to start the Raspberry Pi's ARM processor cores and boot a basic version of the Linux kernel without relying on the proprietary components. This is significant for developers and advocates who aim to build fully open systems. Raspberry Pi systems use Broadcom's VideoCore GPU, which requires a proprietary binary blob to be loaded at boot. Initially, the supporting software stack was entirely proprietary, though parts of the code were later released. Most driver functionality remains within closed-source GPU firmware, accessed via runtime libraries such as OpenMAX IL, OpenGL ES, and OpenVG. These libraries interface with a kernel-space open-source driver, which in turn communicates with the closed GPU firmware. Applications use OpenMAX IL for video, OpenGL ES for 3D graphics, and OpenVG for 2D graphics, with all graphics libraries making use of EGL. In February 2020, Raspberry Pi announced the development of a Vulkan graphics driver. A working prototype demonstrated high performance in Quake III Arena on a Raspberry Pi 3B+ later that year. On 24 November 2020, Raspberry Pi 4's Vulkan driver was declared Vulkan 1.0 conformant, with subsequent conformance updates for versions 1.1 and 1.2.
Official accessories Raspberry Pi offers several official camera modules that connect via the Camera Serial Interface. These modules are used for photography, video capture, and machine vision applications. Raspberry Pi also offers official display peripherals for graphical and touchscreen interfaces: Official Raspberry Pi HATs (Hardware Attached on Top) and expansion boards extend the functionality of Raspberry Pi computers. The HAT standard was introduced in July 2014. Many boards use an EEPROM for automatic configuration. Reception and use Technology writer Glyn Moody described the project in May 2011 as a "potential BBC Micro 2.0", not by replacing PC compatible machines but by supplementing them. In March 2012 Stephen Pritchard echoed the BBC Micro successor sentiment in ITPRO. Alex Hope, co-author of the Next Gen report, is hopeful that the computer will engage children with the excitement of programming. Co-author Ian Livingstone suggested that the BBC could be involved in building support for the device, possibly branding it as the BBC Nano. The Centre for Computing History strongly supports the Raspberry Pi project, feeling that it could "usher in a new era". Before release, the board was showcased by ARM's CEO Warren East at an event in Cambridge outlining Google's ideas to improve UK science and technology education. Harry Fairhead, however, suggests that more emphasis should be put on improving the educational software available on existing hardware, using tools such as MIT App Inventor to return programming to schools, rather than adding new hardware choices. Simon Rockman, writing in a ZDNet blog, was of the opinion that teens will have "better things to do", despite what happened in the 1980s. In October 2012, the Raspberry Pi won T3's Innovation of the Year award, and futurist Mark Pesce cited a (borrowed) Raspberry Pi as the inspiration for his ambient device project MooresCloud. In October 2012, the British Computer Society responded to the announcement of enhanced specifications by stating, "it's definitely something we'll want to sink our teeth into." In June 2017, Raspberry Pi won the Royal Academy of Engineering MacRobert Award. The citation for the award to the Raspberry Pi said it was "for its inexpensive credit card-sized microcomputers, which are redefining how people engage with computing, inspiring students to learn coding and computer science and providing innovative control solutions for industry." Clusters of hundreds of Raspberry Pis have been used for testing programs destined for supercomputers. The Raspberry Pi community was described by Jamie Ayre of FOSS software company AdaCore as one of the most exciting parts of the project. Community blogger Russell Davis said that the community strength allows the Foundation to concentrate on documentation and teaching. The community developed a fanzine around the platform called The MagPi which in 2015, was handed over to Raspberry Pi (Trading) Ltd by its volunteers to be continued in-house. A series of community Raspberry Jam events have been held across the UK and around the world. As of January 2012[update], enquiries about the board in the United Kingdom have been received from schools in both the state and private sectors, with around five times as much interest from the latter. It is hoped that businesses will sponsor purchases for less advantaged schools. 
The CEO of Premier Farnell said that the government of a country in the Middle East has expressed interest in providing a board to every schoolgirl, to enhance her employment prospects. In 2014, the Raspberry Pi Foundation hired a number of its community members including ex-teachers and software developers to launch a set of free learning resources for its website. The Foundation also started a teacher training course called Picademy with the aim of helping teachers prepare for teaching the new computing curriculum using the Raspberry Pi in the classroom. In 2018, NASA launched the JPL Open Source Rover Project, which is a scaled down version of Curiosity rover and uses a Raspberry Pi as the control module, to encourage students and hobbyists to get involved in mechanical, software, electronics, and robotics engineering. There are a number of developers and applications that are using the Raspberry Pi for home automation. These programmers are making an effort to modify the Raspberry Pi into a cost-affordable solution in energy monitoring and power consumption. Because of the relatively low cost of the Raspberry Pi, this has become a popular and economical alternative to the more expensive commercial solutions.[citation needed] In June 2014, Polish industrial automation manufacturer TECHBASE released ModBerry, an industrial computer based on the Raspberry Pi Compute Module. The device has a number of interfaces, most notably RS-485/232 serial ports, digital and analogue inputs/outputs, CAN and economical 1-Wire buses, all of which are widely used in the automation industry. The design allows the use of the Compute Module in harsh industrial environments, leading to the conclusion that the Raspberry Pi is no longer limited to home and science projects, but can be widely used as an Industrial IoT solution and achieve goals of Industry 4.0. In March 2018, SUSE announced commercial support for SUSE Linux Enterprise on the Raspberry Pi 3 Model B to support a number of undisclosed customers implementing industrial monitoring with the Raspberry Pi. In January 2021, TECHBASE announced a Raspberry Pi Compute Module 4 cluster for AI accelerator, routing and file server use. The device contains one or more standard Raspberry Pi Compute Module 4s in an industrial DIN rail housing, with some versions containing one or more Coral Edge tensor processing units. The Organelle is a portable synthesiser, a sampler, a sequencer, and an effects processor designed and assembled by Critter & Guitari. It incorporates a Raspberry Pi computer module running Linux. OTTO is a digital camera created by Next Thing Co. It incorporates a Raspberry Pi Compute Module. It was successfully crowd-funded in a May 2014 Kickstarter campaign. Slice is a digital media player which also uses a Compute Module as its heart. It was crowd-funded in an August 2014 Kickstarter campaign. The software running on Slice is based on Kodi. Numerous commercial thin client computer terminals use the Raspberry Pi. During the COVID-19 pandemic, demand increased primarily due to the increase in remote work, but also because of the use of many Raspberry Pi Zeros in ventilators for COVID-19 patients in countries such as Colombia, which were used to combat strain on the healthcare system. In March 2020, Raspberry Pi sales reached 640,000 units, the second largest month of sales in the company's history. A project was launched in December 2014 at an event held by the UK Space Agency. 
The Astro Pi was an augmented Raspberry Pi that included a sensor HAT with a visible light or infrared camera. The Astro Pi competition, called Principia, officially opened in January and was open to all primary and secondary school aged children who were residents of the United Kingdom. During his mission, British ESA astronaut Tim Peake deployed the computers on board the International Space Station. He loaded the winning code while in orbit, collected the data generated and then sent this to Earth where it was distributed to the winning teams. Covered themes during the competition included spacecraft sensors, satellite imaging, space measurements, data fusion and space radiation. The organisations involved in the Astro Pi competition include the UK Space Agency, UKspace, Raspberry Pi, ESERO-UK and ESA. In 2017, the European Space Agency ran another competition, called Proxima, open to all students in the European Union. The winning programs were run on the ISS by Thomas Pesquet, a French astronaut. In December 2021, the Dragon 2 spacecraft launched by NASA carried a pair of Astro Pi units. See also References Further reading External links
======================================== |
[SOURCE: https://www.fast.ai/posts/2021-10-27-vaccine-safety.html] | [TOKENS: 6310] |
SARS-CoV-2 Spike Protein Impairment of Endothelial Function Does Not Impact Vaccine Safety Jeremy Howard and Uri Manor October 27, 2021 My colleague Dr Uri Manor was a senior author on a study in March this year which has become the most discussed paper in the history of Circulation Research and is in the top 0.005% of discussed papers across all topics. That’s because it got widely picked up by anti-vaxx groups that totally misunderstood what it says. Uri and I decided to set the record straight, and we wrote a paper that explains that “SARS-CoV-2 Spike Protein Impairment of Endothelial Function Does Not Impact Vaccine Safety”. Unfortunately peer review has taken months, so it’s still not published. Therefore, we’ve decided to make the paper available prior to review below (as HTML) and here (as PDF). Abstract Lei et al. showed the spike protein in SARS-CoV-2 alone was enough to cause damage to lung vascular endothelium. The authors noted that their results suggest that “vaccination-generated antibody and/or exogenous antibody against S protein not only protects the host from SARS-CoV-2 infectivity but also inhibits S protein-imposed endothelial injury”. We show that there is no known mechanism by which the spike protein impairment of endothelial function could reduce vaccine safety, and that vaccine safety data clearly shows that the spike proteins in vaccines do not reduce vaccine safety. Overall, we conclude that spike proteins encoded by vaccines are not harmful and may be beneficial to vaccine recipients. Background COVID-19 has been widely understood to be a respiratory lung disease. However, there is now a growing consensus that SARS-CoV-2 also attacks the vascular system [Potus et al., 2020, Ackermann et al., 2020, Siddiqi et al., 2020, Teuwen et al., 2020]. Earlier studies of other coronaviruses have suggested that their spike proteins contributed to damaging vascular endothelial cells [Kuba et al., 2005]. Lei et al. created a pseudovirus that was surrounded by a SARS-CoV-2 crown of spike (S) proteins but did not contain any actual virus, and found that exposure to this pseudovirus resulted in damage to the lungs and arteries of an animal model. They concluded that “S protein alone can damage vascular endothelial cells (ECs) by downregulating ACE2 and consequently inhibiting mitochondrial function”. Lei et al. noted that their conclusions suggest that vaccine-induced antibodies “not only protects the host from SARS-CoV-2 infectivity but also inhibits S protein-imposed endothelial injury”. However, they did not tackle the question of whether the findings of EC damage from S protein might also have an unintended negative side effect of reducing vaccine safety. Vaccine safety has become an important issue due to Vaccine-induced Immune Thrombotic Thrombocytopenia (VITT), also known as Vaccine-induced Immune Thrombocytopenia and Thrombosis, which has resulted in cases in recipients of the Oxford/AstraZeneca (AZ) and Johnson & Johnson (JJ) vaccines [Makris et al., 2021]. VITT refers to a rare combination of thrombosis (usually CVST) and thrombocytopenia which has been found in some patients 4 to 30 days after they receive their first AZ or JJ vaccine dose (and occasionally after their second dose). Regulators have found that clots are extremely rare, and that the benefits of the vaccines outweigh the risks. However, the roll-out of the AZ and JJ vaccines has been restricted in many jurisdictions [Mahase, 2021].
In the UK, for instance, the Joint Committee on Vaccination and Immunisation (JCVI) recommend avoiding the AZ vaccine for those under 40 years old, based on “reports of blood clotting cases in people who also had low levels of platelets in the UK, following the use of Oxford/AstraZeneca vaccine.” [Public Health England, 2021] With an Altmetric Attention Score of 3726 (as of May 23rd, 2021), Lei et al. has become the most discussed paper in the history of Circulation Research and is in the top 0.005% of discussed papers across all topics. By reading a random sample of social media posts that link to the paper, we found that the great majority of readers express a view that the paper shows that the vaccine is not safe, and that therefore people should not get vaccinated. This view has also been widely shared in blog posts, such as Adams , which states, “Bombshell Salk Institute science paper reveals the covid spike protein is what’s causing deadly blood clots and it’s in all the covid vaccines (by design)” and concludes “The vaccines literally inject people with the very substance that kills them. This isn’t medicine; its medical violence against humanity”. Furthermore, some doctors are now publicly expressing concerns about vaccine safety, based on concerns about the impact of spike proteins. [Bruno et al., 2021] Because Lei et al. did not explicitly discuss the relevance of its findings to vaccine safety, and because it has been widely cited as showing that vaccines are not safe, including by some doctors, we will examine whether its findings should result in pausing or stopping the vaccine rollouts. Analysis of Current Data To ascertain whether spike protein impairment of endothelial function reduces vaccine safety we can directly observe the results of vaccine use. The vaccine with the most recorded cases of VITT is the AZ vaccine. The largest roll-out of the AZ vaccine is in England. The roll-out began in December 2020, and by the start of February 2021 over 10 million people had received at least one dose. By mid-April 2021, over 10 million people had received their second dose. Public Health England publishes data on “Excess mortality in England”. This data shows that from March 20, 2020, until February 19, 2021, there were 101,486 excess deaths in England. From February 20, 2021 (two months after the start of England’s vaccine roll-out), until April 30, 2021 (the latest data available at writing), there have been no excess deaths in England. As of May 5, 2021, there were 262 reported cases of VITT in the UK after the first dose of the vaccine, resulting in 51 deaths, and eight cases have been reported after a second dose [Medicines & Healthcare products Regulatory Agency, 2021]. 35 million people had received their first vaccination by this time. This is over half the population of the UK, and nearly all adults over 30 years old. Children and young adults in the UK will not be receiving the AZ vaccine, based on current guidelines. Overall, with 51 deaths due to VITT, compared to 101,486 probably due to COVID-19, we can see that the overall impact of the vaccine is to greatly reduce deaths. Even if the spike proteins in vaccines resulted in reduced endothelial function (which, as we shall see shortly, they do not), the impact would clearly not be significant enough to result in the need to reduce or stop vaccine rollouts. All currently approved SARS-CoV-2 vaccines incorporate spike proteins. 
If the spike proteins in vaccines resulted in significantly reduced endothelial function, causing VITT, then we would expect to see reports of VITT in recipients of all the available vaccines. However, this is not the case. There are no reports of VITT in recipients of the Moderna or Pfizer vaccines. It is unlikely that this is due to failure to identify VITT, since the particular combination of thrombosis and thrombocytopenia is very rare, and the issues around vaccine safety widely reported and discussed. Furthermore, it is statistically unlikely. As of May 15, 2021, in the USA 156.2 million people had received at least one dose of SARS-CoV-2 vaccine, the vast majority of which were Pfizer and Moderna. Since each recipient’s vaccine VITT response is an independent binary event, we can model it with a binomial distribution. The UK VITT death rate is 0.0001%. If the spike proteins were the cause of VITT, we would expect the same death rate in the US, which would result in 183-273 deaths (99% confidence interval). However, we have seen zero reports of VITT in the US. Mechanism of genetically-encoded spike protein vaccines Lei et al. found that freely circulating, spike protein-decorated pseudovirus at a very high dosage (half a billion pseudovirus particles per animal) delivered directly to the trachea damages lung arterial endothelial cells in an animal model. Similarly, an extremely high concentration (4 micrograms per milliliter) of purified recombinant spike protein could damage human pulmonary arterial endothelial cells in vitro [Lei et al., 2021]. These extremely high concentrations were used to simulate what may happen during a severe case of COVID-19 infection, wherein humans may have what some have estimated to be as high as 1 to 100 billion virions in the lungs [Sender et al., 2020]. Given there are approximately 100 spike proteins per virion [Neuman et al., 2011], this means COVID-19 infections could in theory result in as many as 10 trillion spike proteins. In wild-type viruses, the spike protein is cleaved such that the S1 portion is released and can be free to circulate in the serum [Xia et al., 2020], where it could potentially interact with ACE2 receptors on the endothelium. Thus, in both the spike protein laboratory experiments described in Lei et al. and in severe COVID-19 cases, exceedingly large amounts of freely circulating spike protein are present. Animal studies have been performed to measure the distribution of genetically-encoded vaccines and their protein products. In the intramuscular injection site, which is by definition where the maximum amount of payload (i.e. lipid nanoparticles-packed mRNA or adenovirus) will be present and, by extension, where the maximum amount of spike protein will be produced, the payload is undetectable within 24-72 hours in vivo and the protein is undetectable within 10 days at most, and closer to 4 days post-injection when using lower doses more similar to that given to patients. Animal studies show there is some dispersal of payload to distal regions of the body, but as expected the concentrations dramatically decrease from maximum concentration at the injection site (5,680 ng/mL) to much lower concentrations elsewhere, for example they found >3000x lower concentrations (1.82 ng/mL) in the lung, and 10,000x lower concentrations in the brain (0.429 ng/mL) [Feldman et al., 2019]. 
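The binomial estimate described earlier in this section can be reproduced in a few lines; the sketch below derives the per-recipient death rate from the UK figures (51 deaths among roughly 35 million first doses), so the exact interval shifts slightly with the assumed inputs and rounding:

```python
from scipy.stats import binom

n_us_recipients = 156_200_000          # US recipients of at least one dose (mid-May 2021)
death_rate = 51 / 35_000_000           # VITT deaths per first dose, from the UK roll-out

expected = n_us_recipients * death_rate
low, high = binom.ppf([0.005, 0.995], n_us_recipients, death_rate)
print(f"expected VITT deaths ≈ {expected:.0f}; 99% interval ≈ {low:.0f}-{high:.0f}")
```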
Given only a fraction of the payload will be expressed and given that the measurements of mRNA do not necessarily distinguish between functional, full-length mRNA versus non-functional mRNA fragments, only a small fraction of the measured mRNA will be translated into spike protein. The distribution of actual spike protein throughout the body appears to follow an even steeper gradient: in vivo luciferase measurements in animals treated with mRNA vaccines show significant protein expression almost entirely confined to the site of injection [Pardi et al., 2015]. Note that the concentration given to patients is even lower than those used in these animal studies, and that the dispersion appears to drop off faster for lower doses. Overall, these data indicate relatively low, transient amounts of spike protein are produced by the vaccine, and the vast majority of spike protein produced is confined to the site of injection. Therefore, the concentration of freely circulating spike protein from vaccines available to the public is bound to be many orders of magnitude lower than the amount used in Lei et al. [2021]. The impairment found in that study would not be expected from the relatively tiny, physiologically irrelevant amount of spike proteins found in a vaccine. In order to be physiologically relevant to, let alone damaging to, blood vessels, freely moving, soluble spike proteins would have to enter the circulatory system at high enough concentrations to bind and disrupt a significant number of ACE2 receptors on a significant number of vascular endothelial cells. As discussed above, measurements indicate that no significant amount of vaccine enters circulation. The confinement of the expressed spike protein away from the circulatory system prevents it from causing significant damage. In addition to the confined localization of expression, there is another safeguard preventing spike protein from accessing the vascular endothelium in any significant amount: The vaccine uses an engineered form of the spike protein that is fused to a transmembrane anchor. The transmembrane anchor allows the spike protein to appear on the surface or membrane of the cell, but it is held in place by the anchor. This prevents the vast majority of spike protein from drifting away while at the same time creating a fixed target for the immune system to recognize and develop antibodies against the spike protein [Corbett et al., 2020]. While there is a chance for the mRNA-expressing cells to release full spike protein upon destruction by immune cells, the amount released is only going to be a small fraction of that produced by the vaccine, and certainly at too low a level to be physiologically relevant. In agreement with the mechanism-based estimates outlined above, Ogata et al. recently published empirical measurements of freely circulating spike protein produced by the vaccines using an ultra-sensitive SIMOA assay. Their measurements revealed the average spike protein levels to be less than 50 picograms per milliliter [Ogata et al., 2021], which translates to 300 fM. In contrast, the dissociation constant for ACE2 is 15-40 nM [Wang et al., 2020, Wrapp et al., 2020, Lan et al., 2020, Shang et al., 2020]. Thus, the femtomolar levels produced by the vaccines are approximately 100,000x lower than physiologically relevant concentrations, let alone pathological.
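The unit conversion behind this comparison is straightforward to verify; the sketch below assumes a spike monomer mass of roughly 170 kDa (an approximation, not a figure from the paper) and compares the resulting molarity with the quoted ACE2 dissociation constants:

```python
spike_mw = 170e3                      # assumed molecular weight of a spike monomer, g/mol

conc_g_per_L = 50e-12 * 1000          # 50 pg/mL expressed in g/L
conc_molar = conc_g_per_L / spike_mw  # ≈ 3e-13 M, i.e. roughly 300 fM
print(f"circulating spike ≈ {conc_molar / 1e-15:.0f} fM")

for kd in (15e-9, 40e-9):             # quoted ACE2 Kd range, 15-40 nM
    print(f"Kd {kd / 1e-9:.0f} nM is ≈ {kd / conc_molar:,.0f}x higher")
```

Across the quoted Kd range the ratio spans very roughly 50,000x to 140,000x, consistent with the "approximately 100,000x" figure above.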
Importantly, peak spike protein levels are reached within days after injection, and rapidly disappear to undetectable levels within 9 days of the first injection, and drop to much lower or undetectable levels within 3 days after the second injection. At the same time, antibodies against spike protein are inversely correlated with circulating spike protein, supporting the hypothesis that anti-spike antibodies can quickly and effectively neutralize freely circulating spike protein. Adenovirus Vector-Based Vaccines and VITT Importantly, the endothelial damage described by Lei et al. is not the mechanism by which VITT occurs. VITT is an extremely rare and unique form of adverse event associated only with adenovirus vector-based vaccines. It is not caused by spike proteins targeting the endothelial cells, but rather due to induction of an immune response against platelet factor 4 (PF4) by adenovirus vector-based vaccines. PF4 is released by platelets, causes them to clump and form small clots, and has a physiological role in stopping bleeding (hemostasis). Antibodies are not generated against self PF4, but have been described as a rare side effect of heparin, a commonly used blood thinner. In this condition, termed heparin-induced thrombocytopenia (HIT), heparin binds to PF4, and the complex then stimulates an aberrant immune response. Antibodies to PF4 are generated, and these antibodies bind to PF4, and the resulting immune complex then binds to platelets and activates them. This releases more PF4, and a cycle ensues. Activated platelets in HIT form arterial and venous clots, and as platelets get consumed in the clots, the platelet count drops resulting in severe thrombocytopenia. This combination of clots and severely low platelets is unusual. The reason VITT raised an alarm even with very few cases was the unique HIT-like clinical presentation in the absence of any heparin exposure. The seriousness of the condition, and the need to use a blood thinner other than heparin, also made it an important clinical and management issue. Studies now show that in VITT, the adenovirus vector-based vaccine is able to induce high levels of PF4 antibodies in 1 in 100,000 to 1 in 500,000 individuals, much the same way as heparin does in patients with HIT [Greinacher et al., 2021a]. Greinacher et al. [2021b] assessed the clinical and laboratory features of VITT patients and found that the AZ vaccine "can result in the rare development of immune thrombotic thrombocytopenia mediated by platelet-activating antibodies against PF4". The AZ vaccine contains the preservative EDTA, which can help human cell-derived proteins from the vaccine enter the bloodstream, binding to PF4 and producing antibodies [Greinacher et al., 2021a]. Lab tests showed that "High-titer anti-PF4 antibodies activate platelets and induce neutrophil activation and NETs [neutrophil extracellular traps] formation, fuelling the VITT prothrombotic response" [Greinacher et al., 2021a]. Although the JJ vaccine does not use EDTA, it is an adenovirus vector-based vaccine, which is a particularly inflammation-stimulating virus [Appledorn et al., 2008, S Ahi et al., 2011]. The lack of EDTA may result in fewer cases of VITT, but even without EDTA proteins from the vaccine can enter the bloodstream. The hypothesis that VITT is caused by acute inflammatory reactions to vaccine components, independent of the spike protein, in adenovirus vector-based vaccines is consistent both with the experimental results of Greinacher et al.
[2021a], and is consistent with the observation that only the AZ and JJ vaccines (both of which are adenovirus vector-based vaccines) have been associated with VITT, whereas the Moderna and Pfizer vaccines (both of which are mRNA vaccines) have not been associated with VITT. Although the hypothesis of Greinacher et al. [2021a] has not yet been fully confirmed, it is consistent with lab testing, empirical evidence, the extreme rarity of VITT, and mechanistic constraints. It is also possible that it is remediable, since it is not due to the nature of the vaccine itself, but specific to the particulars of the formulation. Conclusion Given these observations, we conclude the vaccines do not produce enough freely circulating spike protein to induce vascular damage via the ACE2 receptor destabilization mechanism described in Lei et al. [2021]. On the contrary, the extremely low, femtomolar levels of circulating spike protein induced by the vaccine are unlikely to have any physiological relevance to vascular endothelial cells, while still allowing the immune system to develop a robust immune response to spike proteins. The presence of anti-spike antibodies may in fact serve to protect vaccinated individuals against not only SARS-CoV-2 infection, but also against spike protein-induced damage to the vascular endothelium. We speculate that this protection against spike protein-induced damage may in part explain why COVID-19 symptoms are much less severe in vaccinated individuals [Rossman et al., 2021]. There is now a very large amount of empirical data available that clearly shows the benefits of all approved SARS-CoV-2 vaccines are far greater than the risks of extremely rare side effects. The data is also not consistent with the hypothesis that VITT is due to spike proteins, since the Pfizer and Moderna vaccines are not resulting in any reports of VITT. The data is, however, consistent with the hypothesis that side effects are due to inflammatory reactions to vaccine components in adenovirus vector-based vaccines. Overall, we conclude that all approved SARS-CoV-2 vaccines provide far more benefits than risks, and that the very rare risk of VITT from the AZ and JJ vaccines is not due to the spike proteins, which are a fundamental part of how the vaccines work, but is most likely due to specific details of the formulation of the vaccines.
Trends in Cardiovascular Medicine, 2020. Laure-Anne Teuwen, Vincent Geldhof, Alessandra Pasut, and Peter Carmeliet. Covid-19: the vasculature unleashed. Nature Reviews Immunology, 20(7):389–391, 2020. Keiji Kuba, Yumiko Imai, Shuan Rao, Hong Gao, Feng Guo, Bin Guan, Yi Huan, Peng Yang, Yanli Zhang, Wei Deng, et al. A crucial role of angiotensin converting enzyme 2 (ace2) in sars coronavirus–induced lung injury. Nature medicine, 11(8):875–879, 2005. M Makris, S Pavord, W Lester, M Scully, and BJ Hunt. Vaccine-induced immune thrombocytopenia and thrombosis (vitt). Research and Practice in Thrombosis and Haemostasis, page e12529, 2021. Elisabeth Mahase. Astrazeneca vaccine: Blood clots are “extremely rare” and benefits outweigh risks, regulators conclude. BMJ, 373, 2021. doi:10.1136/bmj.n931. Public Health England. Jcvi advises on covid-19 vaccine for people aged under 40, May 2021. URL https://tinyurl.com/a8eud9a6. Mike Adams. Bombshell salk institute science paper reveals the covid spike protein is what’s causing deadly blood clots, Jul 2021. URL https://tinyurl.com/52pncva7. Roxana Bruno, Peter McCullough, Teresa Forcades i Vila, Alexandra Henrion-Caude, Teresa García-Gasca, Galina P Zaitzeva, Sally Priester, María J Martínez Albarracín, Alejandro Sousa-Escandon, Fernando López Mirones, et al. Sars-cov-2 mass vaccination: Urgent questions on vaccine safety that demand answers from international health agencies, regulatory authorities, governments and vaccine developers. Beaufort Observer, 2021. Medicines & Healthcare products Regulatory Agency. Coronavirus vaccine - weekly summary of yellow card reporting, May 2021. URL https://tinyurl.com/8xwydmyf. Ron Sender, Yinon Moise Bar-On, Avi Flamholz, Shmuel Gleizer, Biana Bernsthein, Rob Phillips, and Ron Milo. The total number and mass of sars-cov-2 virions in an infected person. medRxiv, 2020. Benjamin W Neuman, Gabriella Kiss, Andreas H Kunding, David Bhella, M Fazil Baksh, Stephen Connelly, Ben Droese, Joseph P Klaus, Shinji Makino, Stanley G Sawicki, et al. A structural analysis of m protein in coronavirus assembly and morphology. Journal of structural biology, 174(1):11–22, 2011. Shuai Xia, Qiaoshuai Lan, Shan Su, Xinling Wang, Wei Xu, Zezhong Liu, Yun Zhu, Qian Wang, Lu Lu, and Shibo Jiang. The role of furin cleavage site in sars-cov-2 spike protein-mediated membrane fusion in the presence or absence of trypsin. Signal transduction and targeted therapy, 5(1):1–3, 2020. Robert A Feldman, Rainard Fuhr, Igor Smolenov, Amilcar Mick Ribeiro, Lori Panther, Mike Watson, Joseph J Senn, Mike Smith, rn Almarsson, Hari S Pujar, et al. mrna vaccines against h10n8 and h7n9 influenza viruses of pandemic potential are immunogenic and well tolerated in healthy adults in phase 1 randomized clinical trials. Vaccine, 37 (25):3326–3334, 2019. Norbert Pardi, Steven Tuyishime, Hiromi Muramatsu, Katalin Kariko, Barbara L Mui, Ying K Tam, Thomas D Madden, Michael J Hope, and Drew Weissman. Expression kinetics of nucleoside-modified mrna delivered in lipid nanoparticles to mice by various routes. Journal of Controlled Release, 217:345–351, 2015. Kizzmekia S Corbett, Darin K Edwards, Sarah R Leist, Olubukola M Abiona, Seyhan Boyoglu-Barnum, Rebecca A Gillespie, Sunny Himansu, Alexandra Schäfer, Cynthia T Ziwawo, Anthony T DiPiazza, et al. Sars-cov-2 mrna vaccine design enabled by prototype pathogen preparedness. Nature, 586(7830):567–571, 2020. 
Alana F Ogata, Chi-An Cheng, Michaël Desjardins, Yasmeen Senussi, Amy C Sherman, Megan Powell, Lewis Novack, Salena Von, Xiaofang Li, Lindsey R Baden, and David R Walt. Circulating SARS-CoV-2 Vaccine Antigen Detected in the Plasma of mRNA-1273 Vaccine Recipients. Clinical Infectious Diseases, May 2021. ISSN 1058-4838. doi:10.1093/cid/ciab465. ciab465. Qihui Wang, Yanfang Zhang, Lili Wu, Sheng Niu, Chunli Song, Zengyuan Zhang, Guangwen Lu, Chengpeng Qiao, Yu Hu, Kwok-Yung Yuen, et al. Structural and functional basis of SARS-CoV-2 entry by using human ACE2. Cell, 181(4):894–904, 2020. Daniel Wrapp, Nianshuang Wang, Kizzmekia S Corbett, Jory A Goldsmith, Ching-Lin Hsieh, Olubukola Abiona, Barney S Graham, and Jason S McLellan. Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation. Science, 367(6483):1260–1263, 2020. Jun Lan, Jiwan Ge, Jinfang Yu, Sisi Shan, Huan Zhou, Shilong Fan, Qi Zhang, Xuanling Shi, Qisheng Wang, Linqi Zhang, et al. Structure of the SARS-CoV-2 spike receptor-binding domain bound to the ACE2 receptor. Nature, 581(7807):215–220, 2020. Jian Shang, Gang Ye, Ke Shi, Yushun Wan, Chuming Luo, Hideki Aihara, Qibin Geng, Ashley Auerbach, and Fang Li. Structural basis of receptor recognition by SARS-CoV-2. Nature, 581(7807):221–224, 2020. Andreas Greinacher, Kathleen Selleng, Jan Wesche, Stefan Handtke, Raghavendra Palankar, Konstanze Aurich, Michael Lalk, Karen Methling, Uwe Völker, Christian Hentschker, et al. Towards understanding ChAdOx1 nCoV-19 vaccine-induced immune thrombotic thrombocytopenia (VITT). Research Square, 2021a. Andreas Greinacher, Thomas Thiele, Theodore E Warkentin, Karin Weisser, Paul A Kyrle, and Sabine Eichinger. Thrombotic thrombocytopenia after ChAdOx1 nCoV-19 vaccination. New England Journal of Medicine, 2021b. DM Appledorn, A McBride, S Seregin, JM Scott, Nathan Schuldt, A Kiang, S Godbehere, and A Amalfitano. Complex interactions with several arms of the complement system dictate innate and humoral immunity to adenoviral vectors. Gene Therapy, 15(24):1606–1617, 2008. Yadvinder S Ahi, Dinesh S Bangari, and Suresh K Mittal. Adenoviral vector immunity: its implications and circumvention strategies. Current Gene Therapy, 11(4):307–320, 2011. Hagai Rossman, Smadar Shilo, Tomer Meir, Malka Gorfine, Uri Shalit, and Eran Segal. COVID-19 dynamics after a national immunization program in Israel. Nature Medicine, pages 1–7, 2021. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/IPadOS] | [TOKENS: 1125] |
Contents iPadOS iPadOS is a mobile operating system developed by Apple for its iPad line of tablet computers. It was given a name distinct from iOS, the operating system used by Apple's iPhones, to reflect the diverging features of the two product lines, such as multitasking. It was introduced as iPadOS 13, reflecting its status as the successor to iOS 12 for the iPad, and first released to the public on September 24, 2019. Major versions of iPadOS are released annually; the current stable version, iPadOS 26.3, was released to the public on February 11, 2026. History The first iPad was introduced on April 3, 2010, and ran iPhone OS 3.2, which added support for the larger device to the operating system, previously only used on the iPhone and its smaller counterpart, the iPod touch. This shared operating system was rebranded as iOS with the release of iOS 4 in June 2010. The operating system initially had rough feature parity across the iPhone, iPod Touch, and iPad, with variations in user interface depending on screen size, and minor differences in the selection of apps included. However, over time, the variant of iOS for the iPad incorporated a growing set of differentiating features, such as picture-in-picture, the ability to display multiple running apps simultaneously (both introduced with iOS 9 in 2015), drag and drop, and a dock that more closely resembled the dock from macOS than the one on the iPhone (added in 2017 with iOS 11). Standard iPad apps were increasingly designed to support the optional use of a physical keyboard. To emphasize the different feature set available on the iPad, and to signal its intention to develop the platforms in divergent directions, at the Worldwide Developers Conference (WWDC) 2019, Apple announced that the variant of iOS that runs on the iPad would be rebranded as "iPadOS". The new naming strategy began with iPadOS 13.1, in 2019. On June 22, 2020, at WWDC 2020, Apple announced iPadOS 14, with compact designs for search, Siri, and calls, improved app designs, handwriting recognition, better AR features, enhanced privacy protections, and app widgets. iPadOS 14 was released to the public on September 16, 2020. On June 7, 2021, at WWDC 2021, iPadOS 15 was announced with widgets on the Home Screen and App Library, the same features that came to iPhone with iOS 14 in 2020. The update also brought stricter privacy protections to Safari, such as hiding the user's IP address so that websites cannot track it. iPadOS 15 was released to the public on September 20, 2021. On June 6, 2022, at WWDC 2022, iPadOS 16 was announced with a Weather app and Stage Manager, along with most of the features included in iOS 16, excluding a customizable lock screen. On June 5, 2023, at WWDC 2023, Apple announced iPadOS 17 with support for widgets on the lock screen, a feature originally launched with iOS 16, along with the majority of the features included in iOS 17. In addition, iPadOS 17 now includes the Apple Health app. On June 10, 2024, at WWDC 2024, Apple announced iPadOS 18. On June 9, 2025, at WWDC 2025, Apple announced iPadOS 26. Features Many features of iPadOS are also available on iOS; however, iPadOS contains some features that are not available in iOS and lacks some features that are available in iOS. Introduced in iPadOS 14, Scribble converts text handwritten by an Apple Pencil into typed text in most text fields. Beginning with iPadOS 15, widgets can be placed on the home screen. Beginning with iPadOS 15, Translate is available.
The feature was announced on June 7, 2021, at WWDC 2021. Translation works with 11 languages.[citation needed] Beginning with iPadOS 16, the Weather app was added to iPad. The application had previously only been available on the iPhone and iPod Touch. The feature was announced on June 6, 2022, at WWDC 2022. iPadOS 16 adds a new feature called Stage Manager that automatically sorts windows by app. iPadOS 17 allows users to personalize their Lock Screens with widgets and fonts. Interactive widgets can be placed on both the Lock Screen and Home Screen for quick access to customizable information, such as weather and reminders. The introduction of the Health app on iPad provides a central location to view and manage health data. New communication features are introduced for Messages and FaceTime, such as leaving an audio or video message (similar to a voicemail) when a FaceTime call goes unanswered, and making FaceTime calls on an Apple TV with the iPad acting as the camera. Messages allows search filters and content types to be combined when searching through messages, and adds transcriptions for audio messages. New accessibility features like Screen Distance and improved Voice Control expand usability options for a wider range of users. Several core apps receive updates, including Photos, Safari, Notes, and Reminders. These updates bring new functionality and improvements to enhance the overall iPad experience. iPadOS 26 introduces a new Liquid Glass design, new apps, and other changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/General_relativity] | [TOKENS: 12364] |
Contents General relativity General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in May 1916 and is the accepted description of the gravitation of macroscopic objects in modern physics. General relativity generalizes special relativity and refines Isaac Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy, momentum and stress of whatever is present, including matter and radiation. The relation is specified by the Einstein field equations, a system of second-order partial differential equations. John Archibald Wheeler summarized it: "Space-time tells matter how to move; matter tells space-time how to curve." Newton's law of universal gravitation, which describes gravity in classical mechanics, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been in agreement with the theory. The time-dependent solutions of general relativity enable us to extrapolate the history of the universe into the past and future, and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data. Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as no self-consistent theory of quantum gravity has been found. It is not yet known how gravity can be unified with the three non-gravitational interactions: strong, weak and electromagnetic. Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be powered by stellar black holes and supermassive black holes, respectively. The theory also predicts gravitational lensing, where the bending of light results in distorted and multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the basis for cosmological models of an expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories. History Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity.
While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913. The Einstein field equations are nonlinear and are considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life. During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975 known as the golden age of general relativity. 
Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests. General relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency. In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated." From classical mechanics to general relativity General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity. At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime. Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. 
According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration. Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass. As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena. With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. 
In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry. Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry. A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity. The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish). Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the stress–energy tensor, which includes both energy and momentum densities as well as stress: pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved spacetime. 
Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the stress–energy tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the stress–energy tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations: $G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa T_{\mu\nu}$. On the left-hand side is the Einstein tensor, $G_{\mu\nu}$, which is symmetric and a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric. In particular, $R = g^{\mu\nu}R_{\mu\nu}$ is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as $R_{\mu\nu} = {R^{\alpha}}_{\mu\alpha\nu}$. On the right-hand side, $\kappa$ is a constant and $T_{\mu\nu}$ is the stress–energy tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant $\kappa$ is found to be $\kappa = 8\pi G/c^{4}$, where $G$ is the Newtonian constant of gravitation and $c$ the speed of light in vacuum. When there is no matter present, so that the stress–energy tensor vanishes, the results are the vacuum Einstein equations, $R_{\mu\nu} = 0$. In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is: $\frac{d^{2}x^{\mu}}{ds^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\,\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds} = 0$, where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^{\mu}{}_{\alpha\beta}$ are the Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3 and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three).
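To make the role of the Christoffel symbols in the geodesic equation concrete, the following sketch (not part of the original article) computes them symbolically with the Python library sympy for the Schwarzschild metric, assuming coordinates (t, r, θ, φ) and signature (−, +, +, +), and prints the component that reproduces the Newtonian acceleration in the weak-field, low-speed limit.

```python
import sympy as sp

t, r, th, ph, G, M, c = sp.symbols('t r theta phi G M c', positive=True)
coords = [t, r, th, ph]
f = 1 - 2*G*M/(c**2*r)

# Schwarzschild metric g_{mu nu}, signature (-, +, +, +), coordinates (t, r, theta, phi)
g = sp.diag(-f*c**2, 1/f, r**2, r**2*sp.sin(th)**2)
g_inv = g.inv()

# Christoffel symbols Gamma^mu_{alpha beta}
#   = 1/2 g^{mu nu} (d_alpha g_{nu beta} + d_beta g_{nu alpha} - d_nu g_{alpha beta})
Gamma = [[[sp.simplify(sum(g_inv[mu, nu] * (sp.diff(g[nu, b], coords[a])
                                            + sp.diff(g[nu, a], coords[b])
                                            - sp.diff(g[a, b], coords[nu])) / 2
                           for nu in range(4)))
           for b in range(4)]
          for a in range(4)]
         for mu in range(4)]

# Gamma^r_{tt}: the component that, in the weak-field, low-speed limit of the
# geodesic equation, reproduces the Newtonian acceleration GM/r^2
print(Gamma[1][0][0])   # equals (GM/r^2) * (1 - 2GM/(c^2 r))
```

The same loop yields all forty independent components; the printed one shows how Newton's inverse-square attraction emerges as the leading term of a purely geometric quantity.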
The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation. In general relativity, the effective gravitational potential energy of an object of mass $m$ revolving around a massive central body $M$ is given by $U_{f}(r) = -\frac{GMm}{r} + \frac{L^{2}}{2mr^{2}} - \frac{GML^{2}}{mc^{2}r^{3}}$. A conservative total force can then be obtained as its negative gradient, $F_{f}(r) = -\frac{GMm}{r^{2}} + \frac{L^{2}}{mr^{3}} - \frac{3GML^{2}}{mc^{2}r^{4}}$, where $L$ is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect. There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory. Definition and basic applications The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building. General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the distribution of energy, momentum and stress contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the stress–energy of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve. While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and low speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation. As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background-independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.
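As a concrete illustration of the effective potential quoted above, the following sketch (not from the article; it uses geometrized units with G = M = c = m = 1 purely for simplicity) finds circular-orbit radii by solving dU_f/dr = 0, which in those units reduces to the quadratic r² − L²r + 3L² = 0. The purely relativistic third term is what makes the two roots merge and then disappear at small angular momentum, a behaviour Newtonian gravity does not show.

```python
import math

def circular_orbit_radii(L):
    """Roots of r^2 - L^2 r + 3 L^2 = 0, i.e. dU_f/dr = 0 with G = M = c = m = 1."""
    disc = L**4 - 12 * L**2
    if disc < 0:
        return None                       # no circular orbits: the particle spirals in
    root = math.sqrt(disc)
    return ((L**2 - root) / 2,            # inner root: unstable circular orbit
            (L**2 + root) / 2)            # outer root: stable circular orbit

print(circular_orbit_radii(4.0))   # (4.0, 12.0)
print(circular_orbit_radii(3.0))   # None: angular momentum too small
# The two roots merge at r = 6 (i.e. 6GM/c^2) when L^2 = 12: the innermost
# stable circular orbit, a feature with no Newtonian counterpart.
```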
The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's stress–energy tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present. Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture). Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories. Consequences of Einstein's theory General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication. Assuming that the equivalence principle holds, gravity influences the passage of time. 
Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation. Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the existing level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid. In the vicinity of a non-rotating sphere, the time dilation due to gravity, derived from the Schwarzschild metric, is $t_{0} = t_{f}\sqrt{1 - \frac{2GM}{rc^{2}}}$, where $t_{0}$ is the proper time elapsed for an observer at radial coordinate $r$ within the gravitational field, $t_{f}$ is the time elapsed for an observer far from the massive body, $M$ is the mass of the sphere, $G$ is the gravitational constant, and $c$ is the speed of light. General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun. This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity. Closely related to light deflection is the Shapiro time delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space. Predicted in 1916 by Albert Einstein, there are gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. They represent one of several analogies between weak-field gravity and electromagnetism, in that they are analogous to electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult.
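Returning to the Schwarzschild time-dilation formula quoted above, a rough numerical check (not part of the article; it ignores the special-relativistic velocity effect and uses rounded values for the constants) shows why GPS satellite clocks must be corrected relative to clocks on the ground.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg
c = 2.998e8            # speed of light, m/s
r_surface = 6.371e6    # Earth's mean radius, m
r_gps = 2.6571e7       # GPS orbital radius (~20,200 km altitude), m

def rate(r):
    """Proper-time rate relative to a far-away observer: sqrt(1 - 2GM/(r c^2))."""
    return math.sqrt(1 - 2 * G * M / (r * c**2))

# Extra proper time accumulated per day by the higher (GPS) clock
delta = (rate(r_gps) - rate(r_surface)) * 86400.0
print(f"{delta * 1e6:.1f} microseconds per day")
# ~ +45 microseconds per day: the gravitational part of the GPS clock correction
```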
However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by $10^{-21}$ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed. Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are the only way to construct appropriate models. General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction. In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations. The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude. In general relativity the perihelion shift $\sigma$, expressed in radians per revolution, is approximately given by $\sigma = \frac{24\pi^{3}L^{2}}{T^{2}c^{2}(1-e^{2})}$, where $L$ is the semi-major axis of the orbit, $T$ is the orbital period, $c$ is the speed of light, and $e$ is the orbital eccentricity. According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation. The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR 1913+16 they had discovered in 1974.
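As a brief aside on the perihelion-shift formula quoted above, the following sketch (not from the article; the orbital elements are standard textbook values) converts the per-revolution shift for Mercury into the familiar figure of roughly 43 arcseconds per century.

```python
import math

c = 2.998e8            # speed of light, m/s
L = 5.791e10           # semi-major axis of Mercury's orbit, m
T = 87.969 * 86400.0   # orbital period of Mercury, s
e = 0.2056             # orbital eccentricity

# Perihelion shift per revolution, in radians
sigma = 24 * math.pi**3 * L**2 / (T**2 * c**2 * (1 - e**2))

revs_per_century = 100 * 365.25 * 86400.0 / T
arcsec_per_century = sigma * revs_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcseconds per century")
# ~43, matching Mercury's observed anomalous perihelion shift
```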
This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations. Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%. Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe in orbit around Mars has also been used. Astrophysical applications The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed. Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies. Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of contemporary relativity-related research. Several land-based gravitational wave detectors are in operation, for example the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the $10^{-9}$ to $10^{-6}$ hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is under development, with a precursor mission (LISA Pathfinder) having launched in December 2015.
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures. Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, especially diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory. Black holes are also sought-after targets in the search for gravitational waves (cf. section § Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry. The existing models of cosmology are based on Einstein's field equations, which include the cosmological constant $\Lambda$ since it has important influence on the large-scale dynamics of the cosmos: $R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}$, where $g_{\mu\nu}$ is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test.
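As a small numerical aside (not from the article; it assumes a present-day Hubble constant of about 70 km/s/Mpc, a commonly quoted value), two quantities that fall out of the Friedmann–Lemaître–Robertson–Walker description are the Hubble time, which sets the roughly 14-billion-year scale mentioned above, and the critical density 3H₀²/(8πG) corresponding to a spatially flat universe.

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22                # one megaparsec in metres
H0 = 70e3 / Mpc               # Hubble constant in s^-1, assuming 70 km/s/Mpc

hubble_time_gyr = 1 / H0 / (3.156e7 * 1e9)         # seconds converted to gigayears
critical_density = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3 for a flat universe

print(f"Hubble time       ~ {hubble_time_gyr:.1f} Gyr")       # ~14 Gyr
print(f"Critical density  ~ {critical_density:.1e} kg/m^3")   # ~9e-27 kg/m^3,
# i.e. on the order of a few hydrogen atoms per cubic metre
```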
Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of the universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear. An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around $10^{-33}$ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there are a bewildering variety of possible inflationary scenarios, which cannot be restricted by existing observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below). Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, which is an assumption beyond those of standard general relativity to prevent time travel. Some exact solutions in general relativity, such as the Alcubierre drive, offer examples of warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability. Advanced concepts The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group. In 1962 Hermann Bondi, M. G.
van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries. In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams. Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results. Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. 
The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius, given by the equation $r_{\text{s}} = \frac{2GM}{c^{2}}$), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier. Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple. Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below). There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation. Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel.
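To give a sense of scale for the horizon formula quoted above, the following snippet (not part of the article; the masses are standard reference values) evaluates the Schwarzschild radius for the Sun, the Earth, and the supermassive black hole Sagittarius A* at the centre of the Milky Way.

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def schwarzschild_radius(m_kg):
    """r_s = 2GM/c^2, the horizon radius of a non-rotating mass."""
    return 2 * G * m_kg / c**2

for name, mass in [("Sun", M_sun),
                   ("Earth", 5.972e24),
                   ("Sagittarius A*", 4.3e6 * M_sun)]:
    print(f"{name}: {schwarzschild_radius(mass):.3e} m")
# Sun: ~2.95e3 m (about 3 km); Earth: ~8.9e-3 m (about 9 mm);
# Sagittarius A*: ~1.3e10 m (about 0.08 astronomical units)
```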
But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well. Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity. Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories. To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity. The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy. 
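For reference, the "3+1" split mentioned above is conventionally written as follows; this is the standard ADM form of the line element rather than something spelled out in the text.

```latex
% Standard ADM (3+1) decomposition of the spacetime line element:
% N is the lapse function, \beta^i the shift vector, and \gamma_{ij}
% the induced metric on the spatial slices; Einstein's equations then
% split into constraint equations and evolution equations for \gamma_{ij}.
ds^{2} \;=\; -N^{2}\,dt^{2}
        \;+\; \gamma_{ij}\,\bigl(dx^{i} + \beta^{i}\,dt\bigr)\bigl(dx^{j} + \beta^{j}\,dt\bigr).
```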
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture. Relationship with quantum theory If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question. Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes. The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of candidates exist. Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability"). One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. 
The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–deWitt equation (an analogue of the Schrödinger equation) which turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps. Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology. All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. Current status General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far unambiguously fitted observational and experimental data. However, there are strong theoretical reasons to consider the theory to be incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could also indicate the need to consider alternatives or modifications of general relativity. Even taken as is, general relativity provides many possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations, such as those describing merging black holes. In February 2016, it was announced that gravitational waves were directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research. See also References Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_3] | [TOKENS: 10227] |
Contents PlayStation 3 The PlayStation 3 is a home video game console developed and marketed by Sony Computer Entertainment (SCE). It is the successor to the PlayStation 2, and both are part of the PlayStation brand of consoles. The PS3 was first released on November 11, 2006, in Japan, followed by November 17 in North America and March 23, 2007, in Europe and Australasia. It competed primarily with Microsoft's Xbox 360 and Nintendo's Wii as part of the seventh generation of video game consoles. The PlayStation 3 was built around the custom-designed Cell Broadband Engine processor, co-developed with IBM and Toshiba. SCE president Ken Kutaragi envisioned the console as a supercomputer for the living room, capable of handling complex multimedia tasks. It was the first console to use the Blu-ray disc as its primary storage medium, the first to be equipped with an HDMI port, and the first capable of outputting games in 1080p (Full HD) resolution. It also launched alongside the PlayStation Network online service and supported Remote Play connectivity with the PlayStation Portable and PlayStation Vita handheld consoles. In September 2009, Sony released the PlayStation 3 Slim, which removed hardware support for PlayStation 2 games (though limited software-based emulation remained) and introduced a smaller, more energy-efficient design. A further revision, the Super Slim, was released in late 2012, offering additional refinements to the console's form factor. At launch, the PS3 received a mixed reception, largely due to its high price—US$599 (equivalent to $960 in 2025) for the 60 GB model and $499 (equivalent to $800 in 2025) for the 20 GB model—as well as its complex system architecture and limited selection of launch titles. The hardware was also costly to produce, and Sony sold the console at a significant loss for several years. However, the PS3 was praised for its technological ambition and support for Blu-ray, which helped Sony establish the format as the dominant standard over HD DVD. Reception improved over time, aided by a library of critically acclaimed games, the Slim and Super Slim hardware revisions that reduced manufacturing costs, and multiple price reductions. These factors helped the console recover commercially. Ultimately, the PS3 sold approximately 87.4 million units worldwide, narrowly surpassing the Xbox 360 and becoming the eighth best-selling console of all time. As of early 2019,[update] nearly 1 billion PlayStation 3 games had been sold worldwide. The PlayStation 4 was released in November 2013 as the PS3's successor. Sony began phasing out the PlayStation 3 within two years.[b] Shipments ended in most regions by 2016,[c] with final production continuing for the Japanese market until May 29, 2017.[d] History Development of the PlayStation 3 began on March 9, 2001, when Sony Computer Entertainment president Ken Kutaragi announced a partnership with Toshiba and IBM to develop the Cell microprocessor. Around the same time, Shuhei Yoshida led a team focused on exploring next-generation game development. By early 2005, Sony shifted its focus toward preparing PS3 launch titles. In September 2004, Sony confirmed that the PlayStation 3 would use Blu-ray as its primary media format, with support for DVDs and CDs. Nvidia was announced as the partner for the console's graphics processing unit in December 2004. The PS3 was officially unveiled on May 16, 2005, at E3, alongside a prototype of the Sixaxis controller featuring a boomerang-shaped design. 
No working hardware was present at E3 or at the Tokyo Game Show in September, though demonstrations such as Metal Gear Solid 4: Guns of the Patriots were shown running on software development kits and comparable PC hardware. Sony also showcased concept footage based on projected system specifications, including a Final Fantasy VII tech demo. The 2005 prototype included two HDMI ports, three Ethernet ports and six USB ports, but by E3 2006, these had been reduced to one HDMI, one Ethernet, and four USB ports to cut costs. Sony also announced two launch models: a 60 GB version at US$599.99 / ¥60,000 / €599.99 and a 20 GB version at US$499.99 / ¥49,980 / €499.99. To further reduce costs, the 60 GB model was to be the only configuration to feature HDMI output, Wi-Fi, flash card readers and chrome trim. It was scheduled to launch on November 11, 2006, in Japan and November 17, 2006, in North America and Europe. On September 6, 2006, Sony delayed the PAL region launch to March 2007 due to a shortage of Blu-ray drive components, and announced it would not sell the 20 GB model in the region. Later that month at the Tokyo Game Show, Sony confirmed that it had decided to include HDMI output on the 20 GB model. The Japanese launch price for the 20 GB model was also reduced by more than 20%, while the 60 GB model would be sold under an open pricing scheme. Sony showcased 27 playable titles running on final PS3 hardware at the event. Despite the cost-cutting efforts, the PS3 would still be sold at a loss due to high component costs, including the GPU (estimated at US$129) and Blu-ray Disc drive (estimated at US$125). The 20 GB model was estimated to cost US$805.85 to manufacture, about US$307 more than its retail price, while the 60 GB model was estimated at US$840.35, or US$241 above its retail price. Subsidizing the hardware contributed to SCE reporting an operating loss of ¥232 billion (approximately US$1.91 billion) for the fiscal year following the launch of the PS3. Sony later acknowledged cumulative losses of about US$3.3 billion on the PS3 hardware through mid‑2008. The PlayStation 3 was first released in Japan on November 11, 2006, selling over 81,000 units within 24 hours. It launched in North America on November 17, where demand was high and incidents of violence were reported at retail locations. The console was released the same day in Hong Kong and Taiwan. The console launched in Europe, Australia, and other PAL regions on March 23, 2007. It sold 600,000 units across Europe in its first two days, with 165,000 sold in the UK, making it the region's fastest-selling home console at the time. Sales dropped sharply in the following weeks, with some retailers citing high price points and early cancellations. The PS3 launched in other markets throughout 2007, including Singapore (March 7), India (April 27), Mexico (April), and South Korea (June 16). Sony often hosted promotional events or offered bundled content in these regions to drive interest. Following months of speculation, Sony officially unveiled the "Slim" hardware revision (model CECH-2000) on August 18, 2009, during its Gamescom press conference, and it was released in major territories by September 2009. This model featured a significantly slimmer and lighter chassis, reduced power consumption, and a quieter cooling system. These improvements were made possible in part by transitioning to smaller fabrication processes for the system's CPU and GPU. The manufacturing changes reportedly reduced production costs by about 70 percent. 
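The per-unit subsidy implied by the launch-era estimates quoted above is straightforward arithmetic; the short sketch below simply recomputes it from the iSuppli cost figures and the US launch prices, as an illustration of the quoted numbers rather than an independent source.

```python
# Estimated launch manufacturing cost vs. US retail price (figures quoted above).
models = {
    "20 GB": {"est_cost": 805.85, "retail": 499.00},
    "60 GB": {"est_cost": 840.35, "retail": 599.00},
}

for name, m in models.items():
    loss = m["est_cost"] - m["retail"]
    print(f"{name}: cost ${m['est_cost']:.2f}, retail ${m['retail']:.2f}, "
          f"estimated loss per unit ${loss:.2f}")

# Output: roughly $307 for the 20 GB model and $241 for the 60 GB model,
# matching the per-unit losses cited in the text.
```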
Nevertheless, due to the console's simultaneous price reduction to US$299, Sony was still estimated to be losing around US$37 per unit at launch, with losses per unit reduced to approximately US$18 by early 2010. Sony announced the "Super Slim" hardware revision (model CECH-4000) in September 2012, which launched in major markets later that year. Compared to the previous "Slim" model, the new chassis was approximately 20 percent smaller and 25 percent lighter, featured reduced power consumption, and replaced the slot-loading disc drive with a top-loading drive, changes that further lowered manufacturing costs. While the redesigned disc mechanism helped cut costs and save space, reviewers criticized it as feeling cheap and described it as "ultimately a step back". The Super Slim was offered with larger 250 GB and 500 GB hard drives, as well as a low-cost model featuring 16 GB of eMMC flash storage, with the option to install a hard drive later. Games PlayStation 3 launched in North America with 14 titles, with Resistance: Fall of Man emerging as the top seller. The game received critical acclaim and was named PS3 Game of the Year by both GameSpot and IGN. Some anticipated titles, such as The Elder Scrolls IV: Oblivion and F.E.A.R., missed the launch window and arrived in early 2007. In Japan, Ridge Racer 7 led launch sales, while the European launch featured 24 titles, including MotorStorm and Virtua Fighter 5. MotorStorm and Resistance: Fall of Man became the platform's most successful titles of 2007, and each received sequels. At E3 2007, Sony showcased its upcoming first-party lineup including Uncharted: Drake's Fortune, Ratchet & Clank Future: Tools of Destruction, and Warhawk, along with future titles such as Killzone 2, LittleBigPlanet, and Gran Turismo 5 Prologue. Key third-party games such as Metal Gear Solid 4: Guns of the Patriots, Assassin's Creed, Call of Duty 4: Modern Warfare, and Grand Theft Auto IV also helped drive platform momentum. Sony introduced stereoscopic 3D support to PS3 via firmware updates beginning in 2010. The technology was first demonstrated in the January 2009 Consumer Electronics Show, with Wipeout HD and Gran Turismo 5 Prologue used to show how the technology would work. Firmware update 3.30 enabled 3D gaming, while 3.50 added support for 3D movie playback. As of early 2019,[update] nearly 1 billion PS3 games had been sold worldwide. The platform's best-selling titles include Grand Theft Auto V,[e] Gran Turismo 5, The Last of Us, and the Uncharted franchise. Hardware The PlayStation 3 retained the same basic design across its three major hardware revisions, featuring a black plastic shell with a convex top when placed horizontally, or a convex-left side when oriented vertically. The original model used glossy piano black plastic and featured a logo inspired by the font used in the 2002 Spider-Man film, also produced by Sony. According to PlayStation designer Teiyu Goto, this logo was one of the first design elements selected by SCEI president Ken Kutaragi and helped shape the console's overall aesthetic. The font would be abandoned at the introduction of the "Slim" revision in favor of an updated version of the PS2 logo with more curved edges, a design that would remain in use for the PS4 and PS5 logos. The PlayStation 3 is powered by the Cell Broadband Engine, a 64-bit CPU co-developed by Sony, Toshiba and IBM. It includes a 3.2 GHz PowerPC-based Power Processing Element (PPE) and seven Synergistic Processing Elements (SPEs). 
To improve manufacturing yield, the processor is initially fabricated with eight SPEs. After production, each chip is tested, and if a defect is found in one SPE, it is disabled using laser trimming. This approach minimizes waste by utilizing processors that would otherwise be discarded. Even in chips without defects, one SPE is intentionally disabled to ensure consistency across units. Of the seven operational SPEs, six are available for developers to use in games and applications, while the seventh is reserved for the console's operating system. The Cell processor is paired with 256 MB of high-bandwidth XDR DRAM. Graphics processing is managed by the Reality Synthesizer (RSX), developed by Nvidia and paired with 256 MB of GDDR3 SDRAM video memory. The RSX chip can produce resolutions ranging from standard-definition (480i/576i) up to high-definition (1080p). Initially, Sony's hardware development team did not plan to include a dedicated GPU, believing the Cell processor could handle all graphics processing tasks. However, game developers, including Sony's ICE team (the central graphics technologies group for its game studios), demonstrated that without a dedicated GPU, the PlayStation 3's performance would fall short, particularly when compared to the Xbox 360. This feedback prompted the late-stage addition of the RSX GPU during the console's development. Physical media games for the PlayStation 3 were sold on Blu-ray discs and the console features a 2× speed drive which is also capable of reading Blu-ray movies, DVDs, and CDs. Early models came with 20 GB or 60 GB hard drives,[f] with later versions offering up to 500 GB. (see: model comparison) All models have user-upgradeable hard drives. Connectivity options include Bluetooth 2.0 (supporting up to seven devices), Gigabit Ethernet, USB 2.0, and HDMI 1.4.[a] All models except one early version feature built-in Wi-Fi,[g] and some early units include flash card readers for Memory Stick, SD and CompactFlash formats.[h] The PlayStation 3 was released in three main designs: the original, the Slim, and the Super Slim. These revisions introduced changes such as reduced power consumption, smaller form factors, expanded storage, and the removal of certain features to lower costs. The standard controller at the PlayStation 3's launch in 2006 was the wireless Sixaxis, which featured a built-in accelerometer capable of motion sensing across three directional and three rotational axes—six in total, hence the name. However, it lacked vibration functionality. In late 2007, Sony released the DualShock 3, which added vibration support while retaining all motion-sensing features. Numerous other accessories for the console were also developed including the Logitech Driving Force GT, the Logitech Cordless Precision Controller, the Blu-ray Disc Remote, and the PlayTV DVB-T tuner and digital video recorder. In response to the popularity of Nintendo’s motion controls on the Wii, Sony introduced the PlayStation Move in 2010. Its wand-style controllers use internal inertial sensors and a glowing orb tracked by the PlayStation Eye camera to enable precise motion-controlled gameplay. In September 2009, the BBC television program Watchdog aired a report investigating hardware failures in the PlayStation 3, referring to the issue as the "yellow light of death" (YLOD). The report claimed that affected consoles typically failed 18–24 months after purchase—outside of Sony's standard one-year warranty. 
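The yield logic behind disabling one SPE, described at the start of this passage, can be illustrated with a toy binomial model. The per-SPE defect probability below is a made-up illustrative number, not a Sony or IBM figure; the point is only that requiring seven good SPEs out of eight fabricated makes far more dies usable than requiring all eight.

```python
from math import comb

def usable_fraction(p_defect: float, total: int, required: int) -> float:
    """Probability that at least `required` of `total` independent SPEs
    are defect-free, each failing with probability `p_defect`.
    (Illustrative model only -- real yield statistics are more complex.)"""
    p_good = 1.0 - p_defect
    return sum(
        comb(total, k) * p_good**k * p_defect**(total - k)
        for k in range(required, total + 1)
    )

# Hypothetical per-SPE defect probability, purely for illustration.
p = 0.10
all_eight  = usable_fraction(p, total=8, required=8)   # die usable only if all 8 SPEs work
seven_of_8 = usable_fraction(p, total=8, required=7)   # PS3 approach: one SPE may be disabled

print(f"Usable dies needing 8/8 good SPEs: {all_eight:.1%}")    # ~43%
print(f"Usable dies needing 7/8 good SPEs: {seven_of_8:.1%}")   # ~81%
```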
After this period, users were required to pay a fixed fee to receive a refurbished replacement console from Sony. However, according to Ars Technica, the failure rate of PlayStation 3 consoles remained within the expected range for consumer electronics. A 2009 study by warranty provider SquareTrade found a two-year failure rate of 10% for the PlayStation 3. Sony said its internal data indicated that only about 0.5% of consoles were returned with symptoms of the YLOD. In response to the Watchdog report, Sony issued a statement disputing the accuracy and tone of the report, arguing it was misleading. Beyond gaming, the PlayStation 3's hardware was embraced by researchers for high-performance computing applications. Thanks in part to Sony's early support for third-party operating systems, the PS3 was repurposed for tasks ranging from academic research to distributed computing. Dr. Frank Mueller of North Carolina State University clustered eight PS3s in 2007 using Fedora Linux and open-source toolsets. Although limited by the console's 256 MB of RAM, he called the system a cost-effective entry point into parallel computing. Sony and Stanford University also launched the Folding@home client, allowing PS3 owners to contribute processing power to study protein folding for disease research. The U.S. military eventually recognized the PS3's computing potential as well. In 2010, the Air Force Research Laboratory built the Condor Cluster using 1,760 PS3 consoles, achieving 500 trillion floating-point operations per second. At the time, it was the 33rd most powerful supercomputer in the world and was used for analyzing high-resolution satellite imagery. The PS3 was also employed in cybersecurity research; in 2008, a group of researchers used a 200-console cluster to crack SSL encryption. These unconventional applications were curtailed by later hardware revisions that removed support for third-party operating systems. Software Sony included the ability for the operating system, referred to as System Software, to be updated. The updates can be acquired in several ways: over the Internet directly from the console, from update data included on game discs, or by downloading the update file to a computer and transferring it to the console via external storage. PlayStation 3 initially shipped with the ability to install an alternative operating system alongside the main system software. Linux and other Unix-based operating systems were available. The hardware allowed access to six of the seven Synergistic Processing Elements of the Cell microprocessor, but not the RSX 'Reality Synthesizer' graphics chip. The 'OtherOS' functionality was not present in the updated PS Slim models, and the feature was subsequently removed from previous versions of the PS3 as part of the machine's firmware update version 3.21 which was released on April 1, 2010; Sony cited security concerns as the rationale. The firmware update 3.21 was mandatory for access to the PlayStation Network. Eventually, third parties released a modified and unofficial version of the firmware to restore the feature. The removal caused some controversy, as the update removed officially advertised features from already-sold products, and gave rise to several class action lawsuits aimed at making Sony return the feature or provide compensation. On December 8, 2011, U.S. District Judge Richard Seeborg dismissed the last remaining count of the class action lawsuit (other claims in the suit had previously been dismissed), stating: "As a legal matter, ... plaintiffs have failed to allege facts or articulate a theory on which Sony may be held liable." As of January 2014, the U.S.
Court of Appeals for the Ninth Circuit partially reversed the dismissal and sent the case back to the district court. The standard PlayStation 3 version of the XrossMediaBar (pronounced "Cross Media Bar" and abbreviated XMB) includes nine categories of options. These are: Users, Settings, Photo, Music, Video, TV/Video Services, Game, Network, PlayStation Network and Friends (similar to the PlayStation Portable media bar). The TV/Video Services category is for services such as Netflix, and for PlayTV or Torne if installed; the first category in this section is "My Channels", which lets users download various streaming services, including Sony's own streaming services Crackle and PlayStation Vue. By default, the What's New section of PlayStation Network is displayed when the system starts up. PS3 includes the ability to store various master and secondary user profiles, manage and explore photos with or without a musical slide show, play music and copy audio CD tracks to an attached data storage device, and play movies and video files from the hard disk drive, an optical disc (Blu-ray Disc or DVD-Video), or an optional USB mass storage device or flash card; it also supports a USB keyboard and mouse and includes a web browser capable of downloading compatible file types. Additionally, UPnP media will appear in the respective audio/video/photo categories if a compatible media server or DLNA server is detected on the local network. The Friends menu allows mail with emoticon and attached picture features and video chat, which requires an optional PlayStation Eye or EyeToy webcam. The Network menu allows online shopping through the PlayStation Store and connectivity to PlayStation Portable via Remote Play. The PlayStation 3 console protects certain types of data and uses digital rights management to limit the data's use. Purchased games and content from the PlayStation Network store are governed by PlayStation's Network Digital Rights Management (NDRM). The NDRM allows users to access the data from up to two different PlayStation 3 consoles that have been activated using a user's PlayStation Network ID. PlayStation 3 also limits the transfer of copy protected videos downloaded from its store to other machines, and states that copy protected video "may not restore correctly" following certain actions after making a backup, such as downloading a new copy protected movie. Photo Gallery is an optional application to view, create, and group photos from PS3, which is installed separately from the system software at 105 MB. It was introduced in system software version 2.60 and provides a range of tools for sorting through and displaying the system's pictures. The key feature of this application is that it can organize photos into groups according to various criteria. Notable categorizations are colors, ages, or facial expressions of the people in the photos. Slideshows can be viewed with the application, along with music and playlists. The software was updated with the release of system software version 3.40, allowing users to upload and browse photos on Facebook and Picasa. PlayMemories is an optional stereoscopic 3D (and also standard) photo viewing application, which is installed from the PlayStation Store at 956 MB. The application is dedicated specifically to 3D photos and features the ability to zoom into 3D environments and change the angle and perspective of panoramas. It requires system software 3.40 or higher, 3D photos, a 3D HDTV, and an HDMI cable for the 3D images to be viewed properly.
A new application was released as part of system software version 3.40, which allows users to edit videos on PlayStation 3 and upload them to the Internet. The software features basic video editing tools, including the ability to cut videos and add music and captions. Videos can then be rendered and uploaded to video sharing websites such as Facebook and YouTube. In addition to the video service provided by the Sony Entertainment Network, the PlayStation 3 console has access to a variety of third-party video services, dependent on the region: Since June 2009, VidZone has offered a free music video streaming service in Europe, Australia and New Zealand. In October 2009, Sony Computer Entertainment and Netflix announced that the Netflix streaming service would also be available on PlayStation 3 in the United States. A paid Netflix subscription was required for the service. The service became available in November 2009. Initially, users had to use a free Blu-ray disc to access the service; however, in October 2010 the requirement to use a disc to gain access was removed. In April 2010, support for MLB.tv was added, allowing MLB.tv subscribers to watch regular season games live in HD and access new interactive features designed exclusively for PSN. In November 2010, access to the video and social networking site MUBI was enabled for European, New Zealand, and Australian users; the service integrates elements of social networking with rental or subscription video streaming, allowing users to watch and discuss films with other users. Also in November 2010, the video rental service VUDU, NHL GameCenter Live, and the subscription service Hulu Plus launched on PlayStation 3 in the United States. In August 2011, Sony, in partnership with DirecTV, added NFL Sunday Ticket. Then in October 2011, Best Buy launched an app for its CinemaNow service. In April 2012, Amazon.com launched an Amazon Video app, accessible to Amazon Prime subscribers (in the US). Upon reviewing the PlayStation and Netflix collaboration, Pocket-Lint said, "We've used the Netflix app on Xbox too and, as good as it is, we think the PS3 version might have the edge here", and stated that having Netflix and LoveFilm on PlayStation is "mind-blowingly good." In July 2013, the YuppTV OTT player launched its branded application on the PS3 computer entertainment system in the United States. The PlayStation 3 has the ability to play standard audio CDs, a feature that was notably removed from its successors. PlayStation 3 added the ability to rip audio CDs and store the tracks on the system's hard disk; the system has transcoders for ripping to MP3, AAC, or Sony's own ATRAC (ATRAC3plus) formats. Early models were also able to play back Super Audio CDs, though this support was dropped in the third-generation revision of the console from late 2007; all models do, however, retain Direct Stream Digital playback ability. PlayStation 3 can also play music from portable players connected to the system's USB port, including Walkman digital audio players, other ATRAC players, and players that use the UMS protocol. The PlayStation 3 did not feature the Sony CONNECT Music Store. On March 1, 2010 (UTC), many of the original PlayStation 3 models worldwide were experiencing errors related to their internal system clock. The error had many symptoms. Initially, the main problem seemed to be the inability to connect to the PlayStation Network.
However, the root cause of the problem was unrelated to the PlayStation Network, since even users who had never been online also had problems playing installed offline games (which queried the system timer as part of startup) and using system themes. At the same time, many users noted that the console's clock had gone back to December 31, 1999. The event was nicknamed the ApocalyPS3, a play on the word apocalypse and PS3, the abbreviation for the PlayStation 3 console. The error code displayed was typically 8001050F, and affected users were unable to sign in, play games, use dynamic themes, or view/sync trophies. The problem affected only the first- through third-generation original PS3 units, while the newer "Slim" models were unaffected because of different internal clock hardware. Sony confirmed that there was an error and stated that it was narrowing down the issue and was continuing to work to restore service. By March 2, 2010 (UTC), owners of original PS3 models could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. However, for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the internal clock) needed to be updated manually or by re-syncing it via the internet. On June 29, 2010, Sony released PS3 system software update 3.40, which improved the functionality of the internal clock to properly account for leap years. Features PlayStation Portable can connect with PlayStation 3 in many ways, including in-game connectivity. For example, Formula One Championship Edition, a racing game, was shown at E3 2006 using a PSP as a real-time rear-view mirror. In addition, users are able to download original PlayStation format games from the PlayStation Store, and transfer and play them on PSP as well as on PS3 itself. It is also possible to use the Remote Play feature to play these and some PlayStation Network games remotely on PSP over a network or internet connection. Sony has also demonstrated PSP playing back video content from PlayStation 3 hard disk across an ad hoc wireless network. This feature, referred to as Remote Play, is located under the browser icon on both PlayStation 3 and PlayStation Portable. Remote Play has since expanded to allow remote access to PS3 via PSP from any wireless access point in the world. PlayStation Network PlayStation Network is the unified online multiplayer gaming and digital media delivery service provided by Sony Computer Entertainment for PlayStation 3 and PlayStation Portable, announced during the 2006 PlayStation Business Briefing meeting in Tokyo. The service is always connected, free, and includes multiplayer support. The network enables online gaming, the PlayStation Store, PlayStation Home and other services. PlayStation Network uses real currency and PlayStation Network Cards as seen with the PlayStation Store and PlayStation Home. PlayStation Plus (commonly abbreviated PS+ and occasionally referred to as PSN Plus) is a premium PlayStation Network subscription service that was officially unveiled at E3 2010 by Jack Tretton, President and CEO of SCEA. Rumors of such a service had circulated since Kaz Hirai's announcement at TGS 2009 of a possible paid service for PSN, one that would exist alongside the existing free PSN service.
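Returning to the 2010 clock failure described above: Sony attributed it to a bug in the BCD storage of the date, and one commonly suggested mechanism (a reconstruction for illustration, not an official Sony explanation) is that a binary-coded-decimal value was treated as a plain binary number in a leap-year check, making 2010 look divisible by four.

```python
def naive_is_leap(year_value: int) -> bool:
    """Simplified divisible-by-four leap-year test on a two-digit year value."""
    return year_value % 4 == 0

# 2010 with its last two digits stored as binary-coded decimal: the digits
# 1 and 0 packed into one byte give 0x10, which reads as 16 in plain binary.
# (Hypothetical reconstruction of the reported bug, for illustration only.)
year_bcd = 0x10   # BCD encoding of decimal 10

print(naive_is_leap(10))        # False -- the correct answer for 2010
print(naive_is_leap(year_bcd))  # True  -- 16 % 4 == 0, so 2010 is mistaken for a leap year
```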
Launched alongside PS3 firmware 3.40 and PSP firmware 6.30 on June 29, 2010, the paid-for subscription service provides users with enhanced services on the PlayStation Network, on top of the current PSN service which is still available with all of its features. These enhancements include the ability to have demos and game updates download automatically to PlayStation 3. Subscribers also get early or exclusive access to some betas, game demos, premium downloadable content, and other PlayStation Store items. North American users also get a free subscription to Qore. Users may choose to purchase either a one-year or a three-month subscription to PlayStation Plus. The PlayStation Store is an online virtual market available to users of Sony's PlayStation 3 (PS3) and PlayStation Portable (PSP) game consoles via the PlayStation Network. The Store offers a range of downloadable content both for purchase and available free of charge. Available content includes full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through an icon on the XMB on PS3 and PSP. The PS3 store can also be accessed on PSP via a Remote Play connection to PS3. The PSP store is also available via the PC application, Media Go. As of September 24, 2009[update], there have been over 600 million downloads from the PlayStation Store worldwide. The PlayStation Store is updated with new content each Tuesday in North America, and each Wednesday in PAL regions. In May 2010 this was changed from Thursdays to allow PSP games to be released digitally, closer to the time they are released on UMD. On March 29, 2021, Sony announced that it would shut down the PS3 version of the Store on July 2, though previous purchases on the store will remain downloadable. However, on April 19, following fan feedback, Sony reversed their decision and confirmed that the PS3 store would remain operational. What's New was announced at Gamescom 2009 and was released on September 1, 2009, with PlayStation 3 system software 3.0. The feature was to replace the existing [Information Board], which displayed news from the PlayStation website associated with the user's region. The concept was developed further into a major PlayStation Network feature, which interacts with the [Status Indicator] to display a ticker of all content, excluding recently played content (currently in North America and Japan only).[citation needed] The system displays the What's New screen by default instead of the [Games] menu (or [Video] menu, if a movie was inserted) when starting up. What's New has four sections: "Our Pick", "Recently Played", the latest information, and new content available in PlayStation Store. There are four kinds of content the What's New screen displays and links to, on the sections. "Recently Played" displays the user's recently played games and online services only, whereas, the other sections can contain website links, links to play videos, and access to selected sections of the PlayStation Store.[citation needed] The PlayStation Store icons in the [Game] and [Video] section act similarly to the What's New screen, except that they only display and link to games and videos in the PlayStation Store, respectively.[citation needed] PlayStation Home was a virtual 3D social networking service for the PlayStation Network. Home allowed users to create a custom avatar, which could be groomed realistically. 
Users could edit and decorate their personal apartments, avatars, or club houses with free, premium, or won content. Users could shop for new items or win prizes from PS3 games, or Home activities. Users could interact and connect with friends and customize content in a virtual world. Home also acted as a meeting place for users that wanted to play multiplayer video games with others. A closed beta began in Europe from May 2007 and expanded to other territories soon after. Home was delayed and expanded several times before initially releasing. The Open Beta test was started on December 11, 2008. It remained as a perpetual beta until its closure on March 31, 2015. Home was available directly from the PlayStation 3 XrossMediaBar. Membership was free, but required a PSN account. Home featured places to meet and interact, dedicated game spaces, developer spaces, company spaces, and events. The service underwent a weekly maintenance and frequent updates. At the time of its closure in March 2015, Home had been downloaded by over 41 million users. Life with PlayStation, released on September 18, 2008 to succeed Folding@home, was retired November 6, 2012. Life with PlayStation used virtual globe data to display news and information by city. Along with Folding@home functionality, the application provided access to three other information "channels", the first being the Live Channel offering news headlines and weather which were provided by Google News, The Weather Channel, the University of Wisconsin–Madison Space Science and Engineering Center, among other sources. The second channel was the World Heritage channel which offered historical information about historical sites. The third channel was the United Village channel. United Village was designed to share information about communities and cultures worldwide. An update allowed video and photo viewing in the application. The fourth channel was the U.S. exclusive PlayStation Network Game Trailers Channel for direct streaming of game trailers. On April 20, 2011, Sony shut down the PlayStation Network and Qriocity for a prolonged interval, revealing on April 23 that this was due to "an external intrusion on our system". Sony later revealed that the personal information of 77 million users might have been taken, including: names; addresses; countries; email addresses; birthdates; PSN/Qriocity logins, passwords and handles/PSN online IDs. It also stated that it was possible that users' profile data, including purchase history and billing address, and PlayStation Network/Qriocity password security answers may have been obtained. There was no evidence that any credit card data had been taken, but the possibility could not be ruled out, and Sony advised customers that their credit card data may have been obtained. Additionally, the credit card numbers were encrypted and Sony never collected the three digit CVC or CSC number from the back of the credit cards which is required for authenticating some transactions. In response to the incident, Sony announced a "Welcome Back" program, 30 days free membership of PlayStation Plus for all PSN members, two free downloadable PS3 games, and a free one-year enrollment in an identity theft protection program. Sales and production costs Although its PlayStation predecessors had been very dominant against the competition and were hugely profitable for Sony, PlayStation 3 had an inauspicious start, and Sony chairman and CEO Sir Howard Stringer initially could not convince investors of a turnaround in its fortunes. 
The PS3 lacked the unique gameplay of the more affordable Wii, which became that generation's most successful console in terms of units sold. Furthermore, PS3 had to compete directly with Xbox 360, which had a market head start, and as a result the platform no longer had the exclusive titles that the PS2 had enjoyed, such as the Grand Theft Auto and Final Fantasy series (among cross-platform games, Xbox 360 versions were generally considered superior in 2006, although by 2008 the PS3 versions had reached parity with or surpassed them), and it took longer than expected for PS3 to enjoy strong sales and close the gap with Xbox 360. Sony also continued to lose money on each PS3 sold through 2010, although the redesigned "slim" PS3 cut these losses. PlayStation 3's initial production cost is estimated by iSuppli to have been US$805.85 for the 20 GB model and US$840.35 for the 60 GB model. However, they were priced at US$499 and US$599, respectively, meaning that units may have been sold at an estimated loss of $306 or $241 depending on model, if the cost estimates were correct, and thus may have contributed to Sony's games division posting an operating loss of ¥232.3 billion (US$1.97 billion) in the fiscal year ending March 2007. In April 2007, soon after these results were published, Ken Kutaragi, President of Sony Computer Entertainment, announced plans to retire. Various news agencies, including The Times and The Wall Street Journal, reported that this was due to poor sales, while SCEI maintained that Kutaragi had been planning his retirement for six months prior to the announcement. In January 2008, Kaz Hirai, CEO of Sony Computer Entertainment, suggested that the console might start making a profit by early 2009, stating that, "the next fiscal year starts in April and if we can try to achieve that in the next fiscal year that would be a great thing" and that "[profitability] is not a definite commitment, but that is what I would like to try to shoot for". However, market analysts Nikko Citigroup predicted that PlayStation 3 could be profitable by August 2008. In a July 2008 interview, Hirai stated that his objective was for PlayStation 3 to sell 150 million units by its ninth year, surpassing PlayStation 2's sales of 140 million in its nine years on the market. In January 2009 Sony announced that their gaming division was profitable in Q3 2008. After the system's launch, production costs were reduced significantly as a result of phasing out the Emotion Engine chip and falling hardware costs. The cost of manufacturing Cell microprocessors had fallen dramatically as a result of moving to the 65 nm production process, and Blu-ray Disc diodes had become cheaper to manufacture. As of January 2008, each unit cost around $400 to manufacture; by August 2009, Sony had reduced costs by a total of 70%, meaning it only cost Sony around $240 per unit. The PlayStation 3's actual manufacturing cost at launch was never officially disclosed; SCE's Phil Harrison said in a 2019 interview that during the system's launch "it was a worry because 600 bucks was actually too cheap, because the machine was so expensive to make", before telling the interviewer that he could not disclose the real figure but that it would "make your eyebrows shoot clear off the top of your head". Critical reception Early PlayStation 3 reviews after launch were critical of its high price and lack of quality games. Game developers regarded the architecture as difficult to program for.
PS3 was, however, commended for its hardware, including its Blu-ray home theater capabilities and graphics potential. Critical and commercial reception to PS3 improved over time, after a series of price revisions, Blu-ray's victory over HD DVD, and the release of several well-received titles. Ars Technica's original launch review gave PS3 only a 6/10, but a second review of the console in June 2008 rated it a 9/10. In September 2009, IGN named PlayStation 3 the 15th-best gaming console of all time, behind both of its competitors: Wii (10th) and Xbox 360 (6th). However, PS3 has won IGN's "Console Showdown"—based on which console offers the best selection of games released during each year—in three of the four years since it began (2008, 2009 and 2011, with Xbox winning in 2010). IGN judged PlayStation 3 to have the best game line-up of 2008, based on their review scores in comparison to those of Wii and Xbox 360. In a comparison piece by PC Magazine's Will Greenwald in June 2012, PS3 was selected as an overall better console compared to Xbox 360. Pocket-Lint said of the console "The PS3 has always been a brilliant games console," and that "For now, this is just about the best media device for the money." PS3 was given the number-eight spot on PC World magazine's list of "The Top 21 Tech Screwups of 2006", where it was criticized for being "Late, Expensive and Incompatible". GamesRadar ranked PS3 as the top item in a feature on game-related PR disasters, asking how Sony managed to "take one of the most anticipated game systems of all time and—within the space of a year—turn it into a hate object reviled by the entire internet", but added that despite its problems the system has "untapped potential". Business Week summed up the general opinion by stating that it was "more impressed with what the PlayStation 3 could do than with what it currently does". Developers also found the machine difficult to program for. In 2007, Gabe Newell of Valve said, "The PS3 is a total disaster on so many levels, I think it's really clear that Sony lost track of what customers and what developers wanted". He continued, "I'd say, even at this late date, they should just cancel it and do a do over. Just say, 'This was a horrible disaster and we're sorry and we're going to stop selling this and stop trying to convince people to develop for it'". Doug Lombardi, VP of Marketing for Valve, has since stated that Valve is interested in developing for the console and is looking to hire talented PS3 programmers for future projects. He later restated Valve's position, "Until we have the ability to get a PS3 team together, until we find the people who want to come to Valve or who are at Valve who want to work on that, I don't really see us moving to that platform". At Sony's E3 2010 press conference, Newell made a live appearance to recant his previous statements, citing Sony's move to make the system more developer-friendly, and to announce that Valve would be developing Portal 2 for the system. He also claimed that the inclusion of Steamworks (Valve's system to automatically update their software independently) would help to make the PS3 version of Portal 2 the best console version on the market. Activision Blizzard CEO Bobby Kotick has criticized PS3's high development costs and its inferior attach rate and return compared to those of Xbox 360 and Wii. He believes these factors are pushing developers away from working on the console.
In an interview with The Times Kotick stated "I'm getting concerned about Sony; the PlayStation 3 is losing a bit of momentum and they don't make it easy for me to support the platform." He continued, "It's expensive to develop for the console, and the Wii and the Xbox are just selling better. Games generate a better return on invested capital (ROIC) on the Xbox than on the PlayStation." Kotick also claimed that Activision Blizzard may stop supporting the system if the situation is not addressed. "[Sony has] to cut the [PS3's retail] price, because if they don't, the attach rates are likely to slow. If we are being realistic, we might have to stop supporting Sony." Kotick received heavy criticism for the statement, notably from developer BioWare who questioned the wisdom of the threatened move, and referred to the statement as "silly." Despite the initial negative press, several websites have given the system very good reviews mostly regarding its hardware. CNET United Kingdom praised the system saying, "the PS3 is a versatile and impressive piece of home-entertainment equipment that lives up to the hype [...] the PS3 is well worth its hefty price tag." CNET awarded it a score of 8.8 out of 10 and voted it as its number one "must-have" gadget, praising its robust graphical capabilities and stylish exterior design while criticizing its limited selection of available games. In addition, both Home Theater Magazine and Ultimate AV have given the system's Blu-ray playback very favorable reviews, stating that the quality of playback exceeds that of many current standalone Blu-ray Disc players. In an interview, Kazuo Hirai, chairman of Sony Computer Entertainment argued for the choice of a complex architecture. Hexus Gaming reviewed the PAL version and summed the review up by saying, "as the PlayStation 3 matures and developers start really pushing it, we'll see the PlayStation 3 emerge as the console of choice for gaming." At GDC 2007, Shiny Entertainment founder Dave Perry stated, "I think that Sony has made the best machine. It's the best piece of hardware, without question". The PlayStation 3 Slim received extremely positive reviews as well as a boost in sales; less than 24 hours after its announcement, PS3 Slim took the number-one bestseller spot on Amazon.com in the video games section for fifteen consecutive days. It regained the number-one position again one day later. PS3 Slim also received praise from PC World giving it a 90 out of 100 praising its new repackaging and the new value it brings at a lower price as well as praising its quietness and the reduction in its power consumption. This is in stark contrast to the original PS3's launch in which it was given position number-eight on their "The Top 21 Tech Screwups of 2006" list. CNET awarded PS3 Slim four out of five stars praising its Blu-ray capabilities, 120 GB hard drive, free online gaming service and more affordable pricing point, but complained about the lack of backward compatibility for PlayStation 2 games. TechRadar gave PS3 Slim four and a half stars out of five praising its new smaller size and summed up its review stating "Over all, the PS3 Slim is a phenomenal piece of kit. It's amazing that something so small can do so much". However, they criticized the exterior design and the build quality in relation to the original model. Eurogamer called it "a product where the cost-cutting has—by and large—been tastefully done" and said "It's nothing short of a massive win for Sony." 
The Super Slim model of PS3 has received positive reviews. Gaming website Spong praised the new Super Slim's quietness, stating "The most noticeable noise comes when the drive seeks a new area of the disc, such as when starting to load a game, and this occurs infrequently." They added that the fans are quieter than those of Slim, and went on to praise the new smaller, lighter size. Criticism was placed on the new disc loader, stating: "The cover can be moved by hand if you wish, there's also an eject button to do the work for you, but there is no software eject from the triangle button menus in the Xross Media Bar (XMB) interface. In addition, you have to close the cover by hand, which can be a bit fiddly if it's upright, and the PS3 won't start reading a disc unless you do [close the cover]." They also said there is no real drop in retail price. Tech media website CNET gave new Super Slim 4 out of 5 stars ("Excellent"), saying "The Super Slim PlayStation 3 shrinks a powerful gaming machine into an even tinier package while maintaining the same features as its predecessors: a great gaming library and a strong array of streaming services [...]", whilst also criticising the "cheap" design and disc-loader, stating: "Sometimes [the cover] doesn't catch and you feel like you're using one of those old credit card imprinter machines. In short, it feels cheap. You don't realize how convenient autoloading disc trays are until they're gone. Whether it was to cut costs or save space, this move is ultimately a step back." The criticism also was due to price, stating the cheapest Super Slim model was still more expensive than the cheapest Slim model, and that the smaller size and bigger hard drive should not be considered an upgrade when the hard drive on a Slim model is easily removed and replaced. They did praise that the hard drive of the Super Slim model is "the easiest yet. Simply sliding off the side panel reveals the drive bay, which can quickly be unscrewed." They also stated that whilst the Super Slim model is not in any way an upgrade, it could be an indicator as to what's to come. "It may not be revolutionary, but the Super Slim PS3 is the same impressive machine in a much smaller package. There doesn't seem to be any reason for existing PS3 owners to upgrade, but for the prospective PS3 buyer, the Super Slim is probably the way to go if you can deal with not having a slot-loading disc drive." Pocket-Lint gave Super Slim a very positive review saying "It's much more affordable, brilliant gaming, second-to-none video and media player." They think it is "A blinding good console and one that will serve you for years to come with second-hand games and even new releases. Without doubt, if you don't have a PS3, this is the time to buy." They gave Super Slim 4+1⁄2 stars out of 5. Technology magazine T3 gave the Super Slim model a positive review, stating the console is almost "nostalgic" in the design similarities to the original "fat" model, "While we don't know whether it will play PS3 games or Blu-ray discs any differently yet, the look and feel of the new PS3 Slim is an obvious homage to the original PS3, minus the considerable excess weight. Immediately we would be concerned about the durability of the top loading tray that feels like it could be yanked straight out off the console, but ultimately it all feels like Sony's nostalgic way of signing off the current-generation console in anticipation for the PS4." Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_4] | [TOKENS: 7657] |
Contents PlayStation 4 The PlayStation 4 (PS4) is a home video game console developed by Sony Interactive Entertainment. Announced as the successor to the PlayStation 3 in February 2013, it was launched on November 15, 2013, in North America, November 29, 2013, in Europe, South America, and Australia, and on February 22, 2014, in Japan. A console of the eighth generation, it competes with Microsoft's Xbox One and Nintendo's Wii U and Switch. Moving away from the more complex Cell microarchitecture of its predecessor, the console features an APU from AMD built upon the x86-64 architecture, which can theoretically peak at 1.84 teraflops; AMD stated that it was the "most powerful" APU it had developed to date. The PlayStation 4 places an increased emphasis on social interaction and integration with other devices and services, including the ability to play games off-console on PlayStation Vita and other supported devices ("Remote Play"), the ability to stream gameplay online or to friends, with them controlling gameplay remotely ("Share Play"). The console's controller was also redesigned and improved over the PlayStation 3, with updated buttons and analog sticks, and an integrated touchpad among other changes. The console also supports HDR10 high-dynamic-range video and playback of 4K resolution multimedia. The PlayStation 4 was released to critical acclaim, with critics praising Sony for acknowledging its consumers' needs, embracing independent game development, and for not imposing the restrictive digital rights management schemes like those originally announced by Microsoft for the Xbox One. Critics and third-party studios, before its launch, also praised the capabilities of the PlayStation 4 in comparison to its competitors. Heightened demand also helped Sony top global console sales. In September 2016, the console was refreshed with a new, smaller revision, popularly referred to as the "Slim" model, as well as a high-end version called the PlayStation 4 Pro, which features an upgraded GPU and a higher CPU clock rate to support enhanced performance and 4K resolution in supported games. By October 2019, PS4 had become the second best-selling PlayStation console of all time, behind the PlayStation 2. Its successor, the PlayStation 5, was released in November 2020; the PS4 continues to be produced as of 2025.[failed verification] History According to lead architect Mark Cerny, the development of Sony's fourth video game console began as early as 2008. Less than two years earlier, the PlayStation 3 had been launched after months of delays due to issues with production. The delay placed Sony almost a year behind Microsoft's Xbox 360, which was already approaching unit sales of 10 million by the time the PS3 launched. Sony Computer Entertainment Europe CEO Jim Ryan said Sony wanted to avoid repeating the same mistake with PS3's successor. In designing the system, Sony worked with software developer Bungie, who offered their input on the controller and how to make it better for shooting games. In 2012, Sony began shipping development kits to game developers, consisting of a modified PC running the AMD Accelerated Processing Unit chipset. These development kits were known as "Orbis". In early 2013, Sony announced that an event known as PlayStation Meeting 2013 would be held in New York City, U.S., on February 20, 2013, to cover the "future of PlayStation". Sony officially announced the PlayStation 4 at the event. 
It revealed details about the console's hardware and discussed some of the new features it would introduce. Sony also showed off real-time footage of games in development, as well as some technical demonstrations. The design of the console was unveiled in June at E3 2013, and the initial recommended retail prices of $399 (NA), €399 (Europe), and £349 (UK) given. Sony took advantage of problems that Microsoft had been having with their positioning of their newly announced Xbox One, which included its higher price point ($499 in North America), as well as strict regulations on how users could share game media. Besides its lower price point, Sony focused on the ease one would have in sharing media with the PS4. The company revealed release dates for North America, Central America, South America, Europe, and Australia, as well as final pieces of information, at a Gamescom press event in Cologne, Germany, on August 20, 2013. The console was released on November 15, 2013, in the United States and Canada, followed by further releases on November 29, 2013. By the end of 2013, the PS4 was launched in more European, Asian and South American countries. The PS4 was released in Japan at ¥39,980 on February 22, 2014. Sony finalized a deal with the Chinese government in May 2014 to sell its products in mainland China, and the PS4 was the first product to be released. Kazuo Hirai, chief executive officer of Sony, said in May: "The Chinese market, just given the size of it, is obviously potentially a very large market for video game products ... I think that we will be able to replicate the kind of success we have had with PS4 in other parts of the world in China." In September 2015, Sony reduced the price of the PS4 in Japan to ¥34,980, with similar price drops in other Southeast Asian markets. The first official sub-£300 PS4 bundle was the £299.99 "Uncharted Nathan Drake Collection 500GB", and was released in the UK on October 9, 2015; a 1 TB £329.99 version was offered at the same time. On October 9, 2015, the first official price cut of the PS4 in North America was announced: a reduction of $50 to $349.99 (US) and by $20 to $429.99 (Canada). An official price cut in Europe followed in late October 2015, reduced to €349.99/£299.99. On June 10, 2016, Sony confirmed that a hardware revision of the PlayStation 4, rumored to be codenamed "Neo", was under development. The new revision was revealed to be a higher-end model meant to support gameplay in 4K. This new model was sold alongside the existing model, and all existing software was compatible between the two models. Layden stated that Sony has no plans to "bifurcate the market", only that gamers playing on the Neo will "have the same experience, but one will be delivered at a higher resolution, with an enhanced graphical experience, but everything else is going to be exactly as you'd expect". The high-end console was publicly revealed on September 7, 2016, as PlayStation 4 Pro. At the same time, Sony unveiled an updated version of the original PS4 model with a smaller form factor. In May 2018, during a presentation to investors, Sony Interactive Entertainment CEO John Kodera stated that the PlayStation 4 was heading into the end of its lifecycle and that the company was anticipating decreasing year-over-year hardware sales. He explained that Sony would be countering the expected decline by focusing on "strengthen[ing] user engagement" including continued investments into new first-party games and other online services for PS4. 
"We will use the next three years to prepare the next step, to crouch down so that we can jump higher in the future," Kodera added in an interview with the press the following day. Following the launch of the PlayStation 5 in November 2020, Sony discontinued production in Japan of all but the 500 GB Slim model of the PlayStation 4 on January 5, 2021, with the standard PS4 and PS4 Pro still being produced for western markets. According to a report from Bloomberg News in January 2022, Sony had been poised to discontinue the PlayStation 4 at the end of 2021 in favor of the PlayStation 5, but due to a global chip shortage that lasted from 2020 to 2023, this made it difficult for Sony to keep up with PlayStation 5 demand. Instead, the company planned to continue PlayStation 4 production; besides helping to offset the PlayStation 5 shortage, this production method would help assure deals with its component providers for the PlayStation 5. Hardware The technology in the PlayStation 4 is similar to the hardware found in modern personal computers. This familiarity is designed to make it easier and less expensive for game studios to develop games for the PS4. "[We] have not built an APU quite like that for anyone else in the market. It is by far the most powerful APU we have built to date". (February 2013) The PlayStation 4 uses an Accelerated Processing Unit (APU) developed by AMD in cooperation with Sony. It combines a central processing unit (CPU) and graphics processing unit (GPU), as well as other components such as a memory controller and video decoder. The CPU consists of two 28 nm quad-core Jaguar modules totaling 8 64-bit x86-64 cores, 7 of which are available for game developers to use. The GPU consists of 18 compute units to produce a theoretical peak performance of 1.84 TFLOPS. The system's GDDR5 memory is capable of running at a maximum clock frequency of 2.75 GHz (5500 MT/s) and has a maximum memory bandwidth of 176 GB/s. The console contains 8 GB of GDDR5 memory, 16 times the amount of RAM found in the PS3 and is expected to give the console considerable longevity. It also includes secondary custom chips that handle tasks associated with downloading, uploading, and social gameplay. These tasks can be handled in the background during gameplay or while the system is in sleep mode. The console also contains an audio module, which can support in-game chat as well as "a very large number" of audio streams for use in-game. All PlayStation 4 models support high dynamic range (HDR) color profiles. Its read-only optical drive is capable of reading Blu-ray Discs at speeds of up to three times that of its predecessor. The console features a hardware on-the-fly zlib decompression module. The original PS4 model supports up to 1080p and 1080i video standards, while the Pro model supports 4K resolution. The console includes a 500 gigabyte hard drive for additional storage, which can be upgraded by the user. System Software 4.50, which was released on March 9, 2017, enabled the use of external USB hard drives up to 8 TB for additional storage. The PlayStation 4 features Wi-Fi and Ethernet connectivity, Bluetooth, and two USB 3.0 ports. An auxiliary port is also included for connection to the PlayStation Camera, a motion detection digital camera device first introduced on the PS3. A mono headset, which can be plugged into the DualShock 4, is bundled with the system. Audio/video output options include HDMI TV and optical S/PDIF audio. The console does not have an analog audio/video output. 
The PS4 features a "Rest mode" feature. This places the console in a low-power state while allowing users to immediately resume their game or app once the console is awoken. The console also is able to download content such as game and OS updates while it is in this state. The DualShock 4 is PlayStation 4's primary controller; it maintains a similar design to previous iterations of the DualShock series, but with additional features and design refinements. Among other tweaks, the caps of the analog sticks were given a concave design (similar to the Xbox 360 controller), the shape of the triggers and shoulder buttons was refined, the D-pad buttons were given a steeper downward angle to provide a resting space in the center for the user's thumb, and the hand grips were made thicker and given microtexturing to improve their feel. A major addition to the DualShock 4 is a touchpad; it is capable of detecting up to two simultaneous touch presses, and can also be pressed down as a button. The "Start" and "Select" buttons were replaced by "Options" and "Share" buttons; the latter is designed to allow access to the PlayStation 4's social features (including streaming, video recording, and screenshot tools). The DualShock 4 is powered by a non-removable, rechargeable lithium-ion battery, which can be charged using its micro USB connector. The controller also features an internal speaker, and a headphone jack for headsets or headphones; the console is bundled with a pair of headset earbuds. The controller's motion tracking system is more sensitive than those of the PlayStation 3's controllers. An LED "light bar" was additionally added to the front of the controller; it is designed to allow the PlayStation Camera accessory to further track its motion, but can also be used to provide visual effects and feedback within games (such as, for instance, reflecting a player's low health by turning red). Although the PS4 and DualShock 4 continue to use Bluetooth for wireless connectivity, the console is incompatible with PlayStation 3 controllers. An exception are the PlayStation Move motion controllers originally released for PS3, which are officially supported for use with the PlayStation Camera. In October 2013, Shuhei Yoshida stated on Twitter that the DualShock 4 would support "basic functions" when attached to a PC. In August 2016, Sony unveiled an official USB wireless adapter for the DualShock 4, enabling use of all of the controller's functionality on PC. In December 2016, Valve's Steam platform was updated to provide support and controller customization functionality for DualShock 4, through existing APIs for the Steam Controller. A revision of the DualShock 4 was released alongside the "Slim" and Pro models in 2016, and is bundled with these systems. It is largely identical to the original model, except that the touchpad now contains a "stripe" along the top which the light bar's LED can shine through, and the controller can communicate non-wirelessly when connected to the console over USB. The PlayStation Camera is an optional motion sensor and camera for the PlayStation 4, similar to Kinect on Xbox. It includes two 1280×800 pixel lenses operating with an aperture of f/2.0, with 30 cm focusing distance, and an 85° field of view. The dual camera setup allows for different modes of operation, depending on the initiated and running application. The two cameras can be used together for depth-sensing of its surrounding objects in its field of vision. 
Alternatively, one of the cameras can be used for generating the video image, with the other used for motion tracking. PlayStation Camera also features a four-channel microphone array, which helps reduce unwanted background noise and can be used for voice commands. With the PlayStation Camera connected, different users can automatically log-on to the system via face detection. PlayStation VR is a virtual reality system for PlayStation 4; it consists of a headset, which features a 1080p display panel, LED lights on the headset that are used by PlayStation Camera to track its motion, and a control box that processes 3D audio effects, as well as video output to the external display (either simulcasting the player's VR perspective, or providing an asymmetrical secondary perspective). PlayStation VR can also be used with PlayStation Move motion controllers. Software and services The PlayStation 4's operating system is called "Orbis OS", based upon a customized FreeBSD 9. The console does not require an Internet connection for usage, although more functionality is available when connected. The console introduces a customizable menu interface, the "PlayStation Dynamic Menu", featuring a variety of color schemes. The interface displays the player's profile, recent activity, notifications, and other details in addition to unlocked trophies. It allows multiple user accounts, all with their own pass-codes. Each player account has the option to share their real name with friends, or use a nickname in other situations when anonymity is important. Facebook profiles can be connected to PlayStation Network accounts, making it easier to recognize friends. The default home screen features real time content from friends. The "What's New" activity feed includes shared media, recently played games, and other notifications. Services from third-party vendors, such as Netflix and Amazon Prime Video, can be accessed within the interface. Multitasking is available during gameplay, such as opening the browser or managing party chat, and switching between applications is done by double-tapping the "PS" button. The PlayStation Camera or a microphone enables the user to control the system using voice input. Players can command the interface to start a game, take screenshots, and save videos. Saying "PlayStation" initiates voice control, and "All Commands" displays a list of possible commands. The PlayStation 4 supports Blu-ray and DVD playback, including 3D Blu-ray. The playing of CD is no longer supported, as the console no longer has an infrared 780 nm laser. Custom music and video files can still be played from USB drives and DLNA servers using the Media Player app. The PlayStation 4 allows users to access a variety of free and premium PlayStation Network (PSN) services, including the PlayStation Store, PlayStation Plus subscription service, PlayStation Music powered by Spotify, and the PlayStation Video subscription service, which allows owners to rent or buy TV shows and films à la carte. A United States-exclusive cloud-based television-on-demand service known as PlayStation Vue began beta testing in late November 2014. Sony intends to expand and evolve the services it offers over the console's lifespan. Unlike PS3, a PlayStation Plus membership is required to access multiplayer in most games; this requirement does not apply to free-to-play or subscription-based games. Smartphones and tablets can interact with the PlayStation 4 as second screen devices, and can also wake the console from sleep mode. 
A Sony Xperia smartphone, tablet or the PlayStation Vita can be used for streaming gameplay from the console to handheld, allowing supported games to be played remotely from around a household or away from home. Sony has ambitions to make all PS4 games playable on PlayStation Vita. Developers can add Vita-specific controls for use via Remote Play. This feature was later expanded to enable PS4 Remote Play functionality on Microsoft Windows PCs and on Apple OS X Macs. The update, released in April 2016, allows for Remote Play functionality on computers running Windows 8.1, Windows 10, OS X Yosemite, and OS X El Capitan. Remote Play supports resolution options of 360p, 540p, and 720p (1080p is available on PS4 Pro), frame rate options of 30–60 FPS, and the DualShock 4 can be connected via USB. The PlayStation App allows iOS and Android mobile devices to interact with the PlayStation 4 from their device. The user can use this application to purchase PS4 games from the console and have them remotely downloaded, watch live streams of other gamers and view in-game maps while playing games. Social features "Ustream's integration within PS4 consoles will put gamers on a new media field. They will have the ability to direct, produce, and star in their own video game production, simply by being an awesome (or not so awesome!) gamer." Sony focused on "social" aspects as a major feature of the console. Although the PS4 has improved social functionality, the features are optional and can be disabled. Users have the option to create or join community groups based on personal interest. Communities include a discussion board, accomplishments and game clips shared by other members, plus the ability to join group chat and launch cooperative games. Sony stated that "communities are a good way to socialize with like-minded players", particularly when "you want to tackle a big multiplayer raid, but don't have enough friends available." Sony has officially stated that starting April 2021, the community system of the PlayStation Network will be discontinued. This, however, will not prevent users from communicating with their friends in private messaging or in group chats on the PlayStation Network. The DualShock 4 controller includes a "SHARE" button, allowing the player to cycle through the last 60 minutes of recorded gameplay to select a screenshot or video clip appropriate for sharing. Media is uploaded seamlessly from the console to other PSN users or social networking sites such as Dailymotion, Facebook, Twitter and YouTube, or else users can copy media to a USB flash drive and upload to a social network or website of their preference. Players can also use a free video editing application named ShareFactory to cut and assemble their favorite video clips and add custom music or voice commentary with green screen effects. Subsequent updates have added options for picture-in-picture layouts, the ability to create photo collages and animated GIFs. Gamers can either watch live gameplay of games which their friends are playing through the PS4 interface with cross-game camera and microphone input, spectate silently, or broadcast their own gameplay live via DailyMotion, Twitch, Ustream, Niconico, or YouTube Gaming, allowing for friends and members of the public to view and comment upon them from other web browsers and devices. If a user is not screen-casting, a friend can send them a "Request to Watch" notification. 
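The Share button behaviour described above, in which only the most recent 60 minutes of gameplay remain available for capture, is essentially a rolling buffer. The sketch below is a hypothetical illustration of that idea; the class and method names are invented for illustration and do not reflect Sony's actual implementation.

```python
from collections import deque
import time

class RollingCaptureBuffer:
    """Keeps only the most recent `window_seconds` of captured gameplay segments."""

    def __init__(self, window_seconds: float = 60 * 60):  # last 60 minutes
        self.window = window_seconds
        self.segments = deque()  # (timestamp, segment) pairs, oldest first

    def add(self, segment, now=None):
        now = time.time() if now is None else now
        self.segments.append((now, segment))
        # Drop anything that has fallen outside the sharing window.
        while self.segments and now - self.segments[0][0] > self.window:
            self.segments.popleft()

    def clip(self, start, end):
        """Return the segments whose timestamps fall in [start, end] for sharing."""
        return [s for t, s in self.segments if start <= t <= end]
```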
Share Play allows users to invite an online friend to join their play session via streaming, even if they do not own a copy of the game. Users can pass control of the game entirely to the remote user or partake in cooperative multiplayer as if they were physically present. Mark Cerny says that remote assistance is particularly useful when confronted by a potentially game-defeating obstacle. "You can even see that your friend is in trouble and reach out through the network to take over the controller and assist them through some difficult portion of the game," he said. Share Play requires a PlayStation Plus subscription and can only be used for one hour at a time. Games Each PlayStation 4 console comes preinstalled with The Playroom, a game designed to serve as a demonstration of the DualShock 4 controller and the PlayStation Camera. The PlayStation Camera accessory is required to play The Playroom. If a camera is not present, a trailer for The Playroom will be displayed instead of the full game. PlayStation 4 games are distributed at retail on Blu-ray Disc, and digitally as downloads through the PlayStation Store. Games are not region-locked, so games purchased in one region can be played on consoles in all regions, and players can sign in on any PS4 console to access their entire digital game library. All PlayStation 4 games must be installed to the console's storage. Additionally, a system called "PlayGo" allows users to begin to play portions of a game (such as opening levels) once the installation or download reaches a specific point, while the remainder of the game is downloaded or installed in the background. Updates to games and system software are also downloaded in the background and while in standby. PS4 users will, in the future, be able to browse and stream games via Gaikai to demo them almost instantaneously. Sony says it is committed to releasing an ever-increasing number of free-to-play games, including PlanetSide 2 and War Thunder. Sony also took steps to make it easier for independent game developers to release games for the PS4 by giving them the option to self-publish their own games rather than rely upon others to distribute their games. The PlayStation 4 is not compatible with discs from older PlayStation consoles. Emulated versions of selected PlayStation, PlayStation 2 and PlayStation Portable games, upscaled to high definition and with support for PS4 social features, are available for purchase via the PlayStation Store. In December 2013, Andrew House indicated that Sony was planning to launch a cloud gaming service for the PS4 in North America within the third quarter of 2014, with a European launch to follow in 2015. At the Consumer Electronics Show on January 7, 2014, Sony unveiled PlayStation Now, a digital distribution service that would initially allow users to access PlayStation 3 games on the PS4 via a cloud-based streaming system, purchasing games individually or via a subscription, as a workaround for the console's lack of hardware backward compatibility. The United States open beta went live on July 31, 2014. The official United States release of the service was on January 13, 2015. As of March 2015, PlayStation Now was in closed beta in the United Kingdom. At E3 2017, Sony revealed the "PlayLink" line of games, which let players control the game with their mobile devices and PlayLink companion apps.
The apps would release on November 21 that same year and would include games such as Knowledge is Power, That's You, Hidden Agenda, SingStar Celebration, and Planet of the Apes: Last Frontier. On November 14, 2018, more games would be released, including Just Deal With It, Chimparty, WordHunters, UNO, Melbits World, Ticket To Ride, and Knowledge is Power: Decades. In 2019, the delayed release of Erica made no mention of the PlayLink initiative, when it was planned with PlayLink functionality at the 2017 Paris Games Week event. It would be confirmed by Sony in 2020 that PS4 PlayLink titles would be backwards-compatible with the PlayStation 5. Since 2021, many of the PlayLink companion apps have been delisted from the Apple App Store and Google Play, such as Uno.[citation needed] On December 14, 2023, companion apps for Chimparty, Frantics, Hidden Agenda, Knowledge is Power, Knowledge is Power: Decades, and That's You were no longer downloadable for new Google Play users with devices above Android 9 or 11 due to compatibility issues, with iOS users being unaffected. There are other PlayLink applications that have been published outside of PlayStation, including by Ubisoft for Battleship. Release "It's abundantly clear that PS4 is being driven as a collaboration between East and West, as opposed to a dictation from one side to the other. Developers are fully involved, activated, discussed and doing really cool collaborative things." Pre-release reception to the console from developers and journalists was positive. Mark Rein of Epic Games praised the "enhanced" architecture of Sony's system, describing it as "a phenomenal piece of hardware". John Carmack, programmer and co-founder of id Software, also commended the design by saying "Sony made wise engineering choices", while Randy Pitchford of Gearbox Software expressed satisfaction with the amount of high-speed memory in the console. Eurogamer also called the graphics technology in the PS4 "impressive" and an improvement from the difficulties developers experienced on the PlayStation 3. Numerous industry professionals have acknowledged the PlayStation 4's performance advantage over the Xbox One. Speaking to Edge magazine, multiple game developers have described the difference as "significant" and "obvious". ExtremeTech says the PS4's graphics processing unit offers a "serious advantage" over the competition, but due to the nature of cross-platform development, games that share the same assets will appear "very similar". In other scenarios, designers may tap some of PS4's additional power in a straightforward manner, to boost frame rate or output at a higher resolution, whereas games from Sony's own first-party studios that take full advantage of the hardware "will probably look significantly better than anything on the Xbox One." In response to concerns surrounding the possibility of DRM measures to hinder the resale of used games (and in particular, the initial DRM policies of Xbox One, which did contain such restrictions), Jack Tretton explicitly stated during Sony's E3 press conference that there would be "no restrictions" on the resale and trading of PS4 games on physical media, while software product development head Scott Rohde specified that Sony was planning to disallow online passes as well, going on to say that the policies were designed to be "consumer-friendly, extremely retailer-friendly, and extremely publisher-friendly". 
After Sony's E3 2013 press conference, IGN responded positively to Sony's attitude towards indie developers and trading games, stating they thought "most gamers would agree" that "if you care about games like [Sony] do, you'll buy a PlayStation 4". PlayStation 4's removable and upgradable hard drive also drew praise from IGN, with Scott Lowe commenting that the decision gave the console "another advantage" over the Xbox One, whose hard drive cannot be accessed. GameSpot called the PlayStation 4 "the gamer's choice for next-generation", citing its price, lack of restrictive digital rights management, and most importantly, Sony's efforts to "acknowledge its consumers" and "respect its audience" as major factors. The PlayStation 4 has received very positive reviews by critics. Scott Lowe of IGN gave it an 8.2 rating out of 10 praising the console's DualShock 4 design and social integration features. He criticized the console's lack of software features and for underutilizing the DualShock 4's touch pad. The Gadget Show gave a similar review complimenting the DualShock 4's new triggers and control sticks, in addition to the new Remote Play feature, yet criticized the system's lack of media support at launch. IGN compared the Xbox One and the PlayStation 4 over various categories, allowing their readers to vote for their preferred system. The PS4 won every category offered, and IGN awarded the PS4 with their People's Choice Award. Shortly following the launch, it became apparent that some games released on multiple platforms were available in higher resolutions on the PS4 as opposed to other video game consoles. Kirk Hamilton of Kotaku reported on the differences in early games such as Call of Duty: Ghosts and Assassin's Creed IV: Black Flag which ran at 1080p on the PS4, but in 720p and 900p, respectively, on the Xbox One. Demand for PlayStation 4 was strong. In August 2013, Sony announced the placement of over a million preorders for the console, while on the North American launch alone, one million PlayStation 4 consoles were sold. In the UK, the PlayStation 4 became the best-selling console at launch, with the sale of 250,000 consoles within a 48-hour period and 530,000 in the first five weeks. On January 7, 2014, Andrew House announced in his Consumer Electronics Show keynote speech that 4.2 million PS4 units had been sold-through by the end of 2013, with more than 9.7 million software units sold. On February 18, 2014, Sony announced that, as of February 8, it had sold over 5.3 million console units following the release of the PS4 onto the North American and Western/Central European markets. Within the first two days of release in Japan during the weekend of February 22, 2014, 322,083 consoles were sold. PS4 software unit sales surpassed 20.5 million on April 13, 2014. During Japan's 2013 fiscal year, heightened demand for the PS4 helped Sony top global console sales, beating Nintendo for the first time in eight years. According to data released by Nielsen in August 2014, nine months after the PS4 was released, thirty-one percent of its sales were to existing Wii and Xbox 360 owners, none of whom had by then owned a PS3. At Gamescom 2014, it was announced that 10 million PS4 units had been sold-through to consumers worldwide, and on November 13, it was announced that the PlayStation 4 was the top-selling console in the U.S. for the tenth consecutive month. In its first sales announcement of 2015, Sony confirmed on January 4 that it had sold-through 18.5 million PlayStation 4 units. 
Sony updated the sell-through figures for the system throughout 2015: over 20 million consoles as of March 3, 2015, over 30 million as of November 22, 2015, and over 35 million by the end of 2015. As of May 22, 2016, total worldwide sell-through reached 40 million. As of December 2018, over 91 million consoles and more than 876 million PlayStation 4 games had been sold worldwide. By October 2019, the PS4 had sold 102.8 million units, making it the second best-selling video game console of all time, behind the PlayStation 2. As of June 2015, the PlayStation 4 held a market share of at least 70% within all European countries. Hardware revisions The PlayStation 4 has been produced in various models: the original, the Slim, and the Pro. Successive models have added or removed various features, and each model has variations of Limited Edition consoles. On September 7, 2016, Sony announced a hardware revision of the PlayStation 4, model number CUH-2000, known colloquially as the PlayStation 4 Slim, which phased out the original model. It is a revision of the original PS4 hardware with a smaller form factor; it has a rounded body with a matte finish on the top of the console rather than a two-tone finish, and is 40% smaller in size than the original model. The two USB ports on the front have been updated to the newer USB 3.1 standard and have a larger gap between them, and the optical audio port was removed. This model also features support for USB 3.1, Bluetooth 4.0 and 5.0 GHz Wi-Fi. It was released on September 15, 2016, with a 500 GB model at the same price as the original version of the PlayStation 4. On April 18, 2017, Sony announced that it had replaced this base model with a 1 TB version at the same MSRP. The PlayStation 4 Pro (codenamed Neo, model number CUH-7000) was announced on September 7, 2016, and launched worldwide on November 10, 2016. It is an upgraded version of the PlayStation 4 with improved hardware to enable 4K rendering and improved PlayStation VR performance, including an upgraded GPU with 4.2 teraflops of processing power and hardware support for checkerboard rendering, and a higher CPU clock. As with the PS4 "Slim", this model also features support for USB 3.1, Bluetooth 4.0 and 5.0 GHz Wi-Fi. The PS4 Pro also includes 1 GB of DDR3 memory that is used to swap out non-gaming applications that run in the background, allowing games to utilize an additional 512 MB of the console's GDDR5 memory. Although capable of streaming 4K video, the PS4 Pro does not support Ultra HD Blu-ray. The Pro model launched at $399 (NA), €399 (Europe), and £349 (UK). Games marketed by Sony as PS4 Pro Enhanced have specific optimizations when played on this model, such as 4K resolution graphics or higher performance. For games not specifically optimized, an option known as "Boost Mode" was added in system software 4.5, which can be enabled to force higher CPU and GPU clock rates to possibly improve performance. Rendering games at 4K resolution is achieved through various rendering techniques and hardware features; PlayStation technical chief Mark Cerny explained that Sony could not "brute force" 4K without compromising form factor and cost, so the console was designed to support "streamlined rendering techniques" using custom hardware, "best-in-breed temporal and spatial anti-aliasing algorithms", and "many new features from the AMD Polaris architecture as well as several even beyond it".
The most prominent technique used is checkerboard rendering, wherein the console renders only portions of a scene using a checkerboard pattern, and then uses algorithms to fill in the non-rendered segments. The checkerboarded screen can then be smoothed using an anti-aliasing filter. Hermen Hulst of Guerrilla Games explained that the PS4 Pro could render something "perceptively so close [to 4K] that you wouldn't be able to see the difference". PS4 Pro supports Remote Play, Share Play, and streaming at up to 1080p resolution at 60 frames per second, as well as capturing screenshots at 2160p, and 1080p video at 30 frames per second. In late 2017, Sony issued a new PS4 Pro revision (model number CUH-7100) that featured updated internal components. The actual hardware specifications and performance remained the same as the original model, although the revised console was found to have a slightly quieter fan profile than the original and, as a result, it operated at a slightly higher temperature under load than the CUH-7000. In October 2018, Sony quietly issued another revision (model number CUH-7200), initially as part of Red Dead Redemption 2 hardware bundles. The revision has a different power supply, which uses the same type of cord as the "Slim" model, and was shown to have further improvements in acoustics.
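The checkerboard idea described above can be illustrated with a toy example: shade only the pixels that fall on a checker pattern, then reconstruct the skipped pixels from their rendered neighbours. Real PS4 Pro titles reconstruct using temporal data and motion vectors; the simple spatial averaging and the function names below are illustrative assumptions, not Sony's implementation.

```python
# Toy checkerboard rendering: shade half the pixels, then fill in the rest
# from their already-rendered orthogonal neighbours.

def render_checkerboard(width, height, shade):
    img = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if (x + y) % 2 == 0:          # render only the "black" squares
                img[y][x] = shade(x, y)
    return img

def reconstruct(img):
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            if img[y][x] is None:         # a skipped pixel
                neighbours = [img[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w and img[ny][nx] is not None]
                img[y][x] = sum(neighbours) / len(neighbours)
    return img

frame = reconstruct(render_checkerboard(8, 8, lambda x, y: (x * y) % 256))
```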
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/TvOS] | [TOKENS: 2637] |
Contents tvOS tvOS (formerly Apple TV Software) is an operating system developed by Apple for the Apple TV, a digital media player. In the first-generation Apple TV, Apple TV Software was based on Mac OS X. The software for the second-generation and later Apple TVs is based on the iOS operating system and has many similar frameworks, technologies, and concepts. The second- and third-generation Apple TV have several built-in applications, but do not support third-party applications. On September 9, 2015, Apple announced the fourth-generation Apple TV, with support for third-party applications. Apple also changed the name of the Apple TV operating system to tvOS, adopting the camel case nomenclature that they were using for their other operating systems, iOS and watchOS. The latest version, tvOS 26, was released on September 15, 2025. History On October 30, 2015, the fourth-generation Apple TV became available; it shipped with tvOS 9.0. On November 9, 2015, tvOS 9.0.1 was released, primarily an update to address minor issues. tvOS 9.1 was released on December 8, 2015, along with OS X 10.11.2, iOS 9.2, and watchOS 2.1. Apple also updated the Remote apps on iOS and watchOS, enabling basic remote functionality for the fourth-generation Apple TV (previously, the app only worked with past versions of Apple TV). On November 25, 2015, Facebook debuted their SDK for tvOS, allowing applications to log into Facebook, share to Facebook, and use Facebook Analytics in the same way that iOS applications can. On December 2, 2015, Twitter debuted their login authentication service for tvOS – "Digits" – allowing users to log into apps and services with a simple, unique code available online. On June 13, 2016, at WWDC 2016, Apple SVP of Internet Services Eddy Cue announced tvOS 10. It brought new functionality, such as Siri search improvements, single sign-on for cable subscriptions, a dark mode, and a new Remote application for controlling the Apple TV; it was released on September 13, 2016, along with iOS 10. On June 4, 2018, at WWDC 2018, tvOS 12 was announced. It brought support for Dolby Atmos E-AC3 and was released on September 17, 2018, along with iOS 12. On April 13, 2020, it was discovered that Apple's Siri Smart Speaker HomePod began to run variants of the tvOS software. On June 22, 2020, at WWDC 2020, tvOS 14 was announced. It brought support for the Home app and 4K YouTube videos and was released on September 16, 2020, along with iOS 14 and iPadOS 14. On June 7, 2021, at WWDC 2021, tvOS 15 was announced. It brought new features and improvements, including SharePlay, a new "Shared with You" section on the TV app, and the ability to play content via voice command. It was released on September 20, 2021, along with iOS 15 and iPadOS 15. On June 6, 2022, at WWDC 2022, tvOS 16 was announced. It brought support for Nintendo Switch's Joy-Con and Pro Controllers and additional Bluetooth and USB game controllers. It was released on September 12, 2022, along with iOS 16. On June 5, 2023, at WWDC 2023, tvOS 17 was announced. tvOS 17 brings new features, such as support for FaceTime and video conferencing apps when paired with an iPhone or iPad, a redesigned control center interface, and third-party VPN support. It was released on September 18, 2023, along with iOS 17 and iPadOS 17. On June 10, 2024, at WWDC 2024, tvOS 18 was announced. It was released on September 16, 2024, along with iOS 18 and iPadOS 18. On June 9, 2025, at WWDC 2025, tvOS 26 was announced. 
It is the first tvOS release to feature the new Liquid Glass design and a new year-based numbering system. It was released on September 15, 2025, along with iOS 26 and iPadOS 26. Features tvOS 9 shipped with several new features on the fourth-generation Apple TV. One major new feature was the ability to move through the interface with the new touchpad remote using multi-touch gestures. It also introduced a new App Store in which users can download and install new applications and games made available by developers for the Apple TV and tvOS. tvOS 9 added support for Siri, which offers features such as cross-application search for movies and TV shows, rewind and fast forward, identifying the name, actors, or director of the current movie, and skipping back 15 seconds. tvOS also added an application switcher, more application customization options, cinematic screensavers, and the ability to control the TV using the included Siri Remote through tvOS's built-in support for HDMI-CEC. In addition, tvOS allows the user to control the Apple TV in many different ways, such as using the included Siri Remote, pairing a third-party universal remote, pairing an MFi Gamepad to control games, using the Remote app on iOS, and pairing a Bluetooth keyboard to make typing easier. Accessibility tvOS, derived from iOS, incorporates many of the accessibility features found in iOS and macOS. These include VoiceOver, Zoom, and Siri, which support users who are blind or have low vision. VoiceOver, a screen reader available in over 30 languages, provides spoken descriptions of on-screen content and supports navigation through gestures such as flicks, taps, and the rotor. The system includes options to increase screen contrast by reducing background transparency in various interface elements. A high-contrast cursor can be enabled to highlight focused content, and a Reduce Motion setting simplifies certain animations to minimize visual strain. tvOS supports audio descriptions for films, indicated by the AD (Audio Description) icon in the iTunes Store and in iTunes on macOS and Windows. Bluetooth keyboard support is also available. When used with VoiceOver, characters are read aloud as they are typed and confirmed. While designed for Apple's keyboards, the system is compatible with most third-party Bluetooth keyboards. Closed captioning and SDH (Subtitles for the Deaf or Hard-of-Hearing) are supported for video content, with customizable caption styles and fonts. Compatible media is marked with CC or SDH icons in the iTunes Store. The Siri Remote allows for customization of the touch surface, including tracking speed adjustments and the option to disable the touch functionality entirely on second-generation or later models, using directional buttons instead. Apple's Remote app for iOS devices can also control Apple TV. It includes support for Switch Control, which enables users with motor impairments to navigate the interface using compatible switch devices. Development tvOS 9 shipped with all-new development tools, adding a new SDK for building apps for the TV that includes the APIs found in iOS 9, such as Metal. It also added the tvOS App Store, which allows users to browse, download, and install a wide variety of applications. In addition, developers can use their own interface inside their applications rather than being limited to Apple's interface.
Since tvOS is based on iOS, it is easy to port existing iOS apps to the Apple TV with Xcode while making only a few refinements to the app to better suit the larger screen. Apple provides Xcode free to all registered Apple developers. To develop for the new Apple TV, it is necessary to make a parallax image for the application icon. In order to do this, Apple provides a Parallax exporter and previewer in the development tools for the Apple TV. Version history Information about software updates for Apple TV (2nd generation) onwards is published on Apple's support website. Apple TV Software 1.0 presented the user with an interface similar to that of Front Row. Like Front Row on the Mac, it presented the user with seven options for consuming content: Movies, TV Shows, Music, Podcasts, Photos, Settings, and Sources. It was a modified version of Mac OS X 10.4 Tiger. In February 2008, Apple released a major and free upgrade to the Apple TV, labelled "Take Two" (2.0). This update did away with Front Row and introduced a new interface in which content was organized into six categories, all of which appeared in a large square box on the screen upon startup (movies, TV shows, music, YouTube, podcasts, and photos) and were presented in the initial menu, along with a "Settings" option for configuration, including software updates. It also delivered updates over the air, meaning a computer was no longer required. In October 2009, Apple released another update for the Apple TV called "Apple TV Software 3.0". This update replaced the interface in version 2.0 with a new interface which presented seven horizontal columns across the top of the screen for the different categories of content (Movies, TV Shows, Music, Podcasts, Photos, Internet, and Settings). This update also added features such as content filtering, iTunes Extras, new fonts, and a new Internet radio app. One new feature in particular was the 'Genius' playlist option, allowing for easier and more user-friendly playlist creation. Apple TV Software 4, based on iOS 4 and 5, was the first version of Apple TV Software available on the Apple TV (2nd generation). It ended support for the Apple TV (1st generation). Apple TV Software 4.4 brought My Photo Stream, AirPlay mirroring (from iPhone 4S and iPad 2), NHL, Wall Street Journal, slideshow themes and Netflix subtitles. Contrary to rumors and code found in iOS 5, the release did not bring support for Bluetooth or apps to the Apple TV (2nd generation). Several subsequent point releases consisted only of bug fixes; upgrading from one of these releases resets the device to factory settings as part of the upgrade process. On September 24, 2012, Apple TV (2nd generation) onwards received the Apple TV Software 5 software update, based on iOS 5 and 6, with Shared Photo Streams, iTunes account switching, better AirPlay functionality, and Trailers searching, among other smaller improvements. Further channels were added on August 27, 2013, without a software update: Vevo, Weather Channel, Disney Channel, Disney XD, and Smithsonian Channel (availability of some sources depends on the country). On September 20, 2013, Apple TV (2nd generation) onwards received the Apple TV Software 6 software update, based on iOS 7, with iTunes Radio and AirPlay from iCloud. Third-party US-only content was added on September 26, 2013, without a software update: Major League Soccer (MLS) and Disney Junior. The iMovie Theater app was added on October 22, 2013, without a software update.
This was the final release series for the Apple TV (2nd generation). On September 18, 2014, the third-generation Apple TV received the Apple TV Software 7.0 update, based on iOS 8, with a redesigned UI, Family Sharing and peer-to-peer AirPlay. This release dropped support for the second-generation Apple TV. As of May 2015, the YouTube app only works on Apple TVs that have software 7.2 or later, due to an API change implemented by Google. Amazon Video was automatically added to Apple TVs running 7.2.2 on December 6, 2017. tvOS 9 is based on iOS 9, with adaptations made for a television interface. It was announced on September 9, 2015, alongside the first-generation iPad Pro and the iPhone 6S. Tim Cook introduced tvOS, calling it a modern OS with support for apps. It was only available on the Apple TV (4th generation), released in October 2015. It added a native SDK to develop apps, an App Store to distribute them, Siri, and universal search across multiple apps. It was the initial release on the Apple TV (4th generation); subsequent point releases consisted of bug fixes, improvements, and security fixes.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-18] | [TOKENS: 5641] |
Contents World The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways how things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. 
Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way how things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. 
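The abstract conception just mentioned, on which possible worlds are maximally consistent sets of propositions, can be stated compactly. The following formulation is a standard textbook rendering rather than a quotation from any of the sources discussed here:

```latex
% A set of propositions w is a possible world iff it is consistent and it
% settles every proposition one way or the other (maximality).
\[
  w \text{ is a possible world} \iff
  \underbrace{w \nvdash \bot}_{\text{consistency}}
  \ \text{ and }\
  \underbrace{\forall p\,(p \in w \ \vee\ \neg p \in w)}_{\text{maximality}}
\]
```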
Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other. This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. 
This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. Against pantheism, it holds that there is no outright identity between the two. History of philosophy In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is because physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world". "World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in Western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it with what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. 
It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things; it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it, since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just as the play is more than the imaginary realities appearing in it, so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes. Religion Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a wide sense in this tradition, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools. Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. 
This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation becomes possible by overcoming this illusion through acquiring the knowledge of Brahman. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar). Related terms and problems A worldview is a comprehensive representation of the world and our place in it. As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but our experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective, since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence that we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual, but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds, and the many-worlds interpretation of quantum mechanics carries this reference even in its name. 
Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. This seems to contradict the very idea of a plurality of worlds, since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the Earth, while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time and matter all have their origin in an initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to cool down sufficiently for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized according to their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as consideration of wide-ranging developments with a long-term global impact, like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. 
One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry in the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thought appeared in several separate parts of the world between roughly 800 and 200 BCE. A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberals acknowledge the importance of states but also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, from this perspective, is more complex than a mere balance of power, since a greater variety of agents and interests is involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. 
If the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it this way, and it may change, since it is not prefigured by human nature, according to the constructivists. |
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cn4gq352w89o] | [TOKENS: 1387] |
Tumbler Ridge suspect's ChatGPT account banned before shooting. By Ottilie Mitchell. OpenAI banned a ChatGPT account owned by the suspect of a mass shooting in British Columbia more than half a year before the attack took place. The AI company said it had identified an account owned by Jesse Van Rootselaar in June 2025 through its abuse detection and enforcement efforts, which include identifying accounts being used to further violence. OpenAI said it did not alert authorities to the account because its usage did not meet its threshold of a credible or imminent plan for serious physical harm to others. It said its thoughts were with everyone affected by the tragedy and that following the attack it had "proactively" contacted Canadian police with information on the suspect. Van Rootselaar is suspected of having shot and killed eight people in rural Tumbler Ridge on 12 February in one of the deadliest attacks in Canada's history. According to the Wall Street Journal, which first reported the story, "about a dozen staffers debated whether to take action on Van Rootselaar's posts." Some had identified the suspect's usage of the AI tool as an indication of real-world violence and encouraged leaders to alert authorities, the US outlet reported. But, it said, leaders of the company decided not to do so. In a statement, a spokesperson for OpenAI said: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities." They said the company would continue to support the police's investigation. The BBC has contacted the Royal Canadian Mounted Police for comment. OpenAI has said it will uphold its policy of alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm. It has also said that it trains ChatGPT to discourage imminent real-world harm when it identifies a dangerous situation and to refuse to help people who attempt to use the service for illegal activities. The company added that it is constantly reviewing its referral criteria with experts and is reviewing the case for improvements. The deadly attack on Tumbler Ridge Secondary School left a further 27 people injured. Van Rootselaar was found dead from a self-inflicted gunshot wound at the school. Police said the suspect was born a biological male but identified as a woman. Van Rootselaar's mother and step-brother were among the victims of the shooting. Both were found dead at a local residence, police said. The motive for the attack is not yet known. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_ref-3] | [TOKENS: 1793] |
Contents Language model A language model is a computational model that predicts sequences in natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval. Large language models (LLMs), currently their most advanced form as of 2019, are predominantly based on transformers trained on larger datasets (frequently using texts scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model. History Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars. In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes a word's meaning such that words closer in vector space are similar in meaning and common relationships between words, such as plurality or gender, are preserved. Pure statistical models In 1980, the first significant statistical language model was proposed, and during the decade IBM performed 'Shannon-style' experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. A word n-gram language model is a statistical model of language which calculates the probability of the next word in a sequence from a fixed-size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens ⟨s⟩ and ⟨/s⟩ are introduced to denote the start and end of a sentence. To prevent a zero probability being assigned to unseen words, the probability of each seen word is slightly lowered to make room for the unseen words in a given corpus. To achieve this, various smoothing methods are used, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques, such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network-based models, which in turn have been superseded by transformer-based models often referred to as large language models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is $P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))$, where $Z(w_{1},\ldots ,w_{m-1})$ is the partition function, $a$ is the parameter vector, and $f(w_{1},\ldots ,w_{m})$ is the feature function. 
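As an illustration of the maximum entropy formulation above, the following minimal Python sketch (not taken from the article; the vocabulary, feature set and weights are invented for illustration) scores candidate next words with sparse indicator features and normalizes the scores with the partition function Z:

import math

# A minimal sketch of a maximum entropy language model with indicator
# features over bigrams. Vocabulary, features and weights are hypothetical.
vocab = ["the", "cat", "sat", "mat"]

def features(history, word):
    """Indicator features: here, simply the bigram (last history word, word)."""
    return {("bigram", history[-1], word): 1.0}

# Parameter vector a, stored sparsely as a dict from feature name to weight.
a = {
    ("bigram", "the", "cat"): 2.0,
    ("bigram", "the", "mat"): 1.0,
    ("bigram", "cat", "sat"): 2.5,
}

def score(history, word):
    """Compute a^T f(history, word) for sparse features."""
    return sum(a.get(name, 0.0) * value for name, value in features(history, word).items())

def prob(history, word):
    """P(word | history) = exp(score) / Z(history), with Z summing over the vocabulary."""
    z = sum(math.exp(score(history, w)) for w in vocab)
    return math.exp(score(history, word)) / z

history = ["<s>", "the"]
print({w: round(prob(history, w), 3) for w in vocab})
# Words with positively weighted features ("cat", "mat") receive most of the mass.
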
Maximum entropy language models encode the relationship between a word and its n-gram history using feature functions. The equation is

$$P(w_m \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\big(a^{T} f(w_1, \ldots, w_m)\big),$$

where $Z(w_1, \ldots, w_{m-1})$ is the partition function, $a$ is the parameter vector, and $f(w_1, \ldots, w_m)$ is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on $a$ or some form of regularization. The log-bilinear model is another example of an exponential language model.

The skip-gram language model is an attempt to overcome the data sparsity problem that the word n-gram language model faces. Words represented in an embedding vector need no longer be consecutive; they may leave gaps that are skipped over (hence the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence whose components occur at distance at most k from each other. For example, for a given input sentence, the set of 1-skip-2-grams includes all of its bigrams (2-grams) and, in addition, the word pairs obtained by skipping over one intervening word.

In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if $v$ is the function that maps a word $w$ to its n-dimensional vector representation, then

$$v(\mathrm{king}) - v(\mathrm{male}) + v(\mathrm{female}) \approx v(\mathrm{queen}),$$

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side.

Neural models

Continuous representations or embeddings of words are produced in recurrent neural network-based language models (also known as continuous-space language models). Such continuous-space embeddings help to alleviate the curse of dimensionality: the number of possible word sequences grows exponentially with the size of the vocabulary, which causes a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs), which provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding the syntax, semantics, and ontologies inherent in human language corpora, but they also inherit the inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text.

LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities such as conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning.
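The self-attention mechanism that the transformer paragraph above refers to can be sketched in a few lines. Below is a minimal, single-head scaled dot-product attention example; the array shapes, random weights, and function name are purely illustrative, and real transformer LLMs add multiple heads, causal masking for next-token prediction, positional information, layer normalization, and feed-forward layers.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention.
    X: (sequence_length, d_model) token representations.
    Each output position is a weighted mixture of all value vectors,
    so every token can attend to every other token at once, which is
    what removes the recurrence described above."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (5, 8)
```

Because every output row is computed from all positions with matrix products, the whole sequence can be processed in parallel, in contrast to the step-by-step processing of recurrent networks.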
Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy (the LLM's output distribution) against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance.

Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. Hill climbing (iteratively optimizing models against benchmarks) has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns about overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements.

Although language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, while failing to learn patterns that humans typically do.

Evaluation and benchmarks

Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, for example through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems.
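One standard intrinsic measure used when comparing language models on held-out text (not spelled out in the passage above) is perplexity: the exponential of the average negative log-probability the model assigns to each predicted token. The sketch below is illustrative; the uniform baseline and function names are invented for the example, and the smoothed bigram model sketched earlier could be plugged in as prob_fn.

```python
import math

def perplexity(prob_fn, tokens):
    """Perplexity of a model over a token sequence: exp of the average
    negative log-probability per predicted token.
    prob_fn(prev, word) must return P(word | prev)."""
    log_prob_sum = 0.0
    n_predictions = 0
    for prev, word in zip(tokens, tokens[1:]):
        log_prob_sum += math.log(prob_fn(prev, word))
        n_predictions += 1
    return math.exp(-log_prob_sum / n_predictions)

# Illustrative baseline: a uniform model over a 1,000-word vocabulary
# assigns every next word probability 1/1000, so its perplexity is 1000.
uniform = lambda prev, word: 1 / 1000
test_tokens = "<s> the cat sat on the mat </s>".split()
print(perplexity(uniform, test_tokens))  # ~1000.0
```

Lower perplexity means the model finds the held-out text less surprising; the benchmark suites described above complement this intrinsic measure with task-level evaluations of reasoning, factuality, and safety.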