Software is a collection of programs and data that tell a computer how to perform specific tasks. Software often includes associated software documentation. This is in contrast to hardware, from which the system is built and which actually performs the work.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. As of 2024, most personal computers, smartphones, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past.
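The idea that instructions are just values which update machine state can be sketched with a toy example; the numeric "instruction set" below is invented for illustration, not a real processor's:

```python
# Toy illustration (not a real ISA): each instruction is a numeric tuple
# that changes the machine's state, here a small block of memory.
memory = [0] * 8

# (opcode, address, value): opcode 1 = store value, opcode 2 = add value
program = [
    (1, 0, 42),   # store 42 at address 0
    (2, 0, 1),    # add 1 to the value at address 0
    (1, 3, 7),    # store 7 at address 3
]

for opcode, addr, value in program:
    if opcode == 1:
        memory[addr] = value
    elif opcode == 2:
        memory[addr] += value

print(memory[:4])  # [43, 0, 0, 7]
```

Each step transforms the previous state into the next, exactly as the paragraph describes, though a real processor works on binary-encoded instructions rather than tuples.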
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
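As a small illustration of translation to a lower-level form: CPython compiles Python source into bytecode for its virtual machine — not native machine code, but the same idea of translating a high-level construct into simpler instructions — and the standard `dis` module can display it:

```python
import dis

def add(a, b):
    # A one-line high-level function...
    return a + b

# ...is compiled into several lower-level bytecode instructions
# (load the operands, apply the operation, return the result).
dis.dis(add)
```

The exact instruction names vary between Python versions, but the listing always shows the single `a + b` expression broken into loads, an arithmetic step, and a return.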
An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.
The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem. This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software, whereas software engineering is the application of engineering principles to the development of software.
In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.
On virtually all computer platforms, software can be grouped into a few broad categories.
Based on the goal, computer software can be divided into:
Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software.
Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, which can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using either individual tools or an IDE.
People who use modern general purpose computers usually see three layers of software performing a variety of tasks: platform, application, and user software.
Computer software has to be "loaded" into the computer's storage. Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers, which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
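The cost difference between moving data and pointing at it can be sketched in Python, where a second name acts like a pointer (a reference) to the same object rather than a copy of it:

```python
big = list(range(1_000_000))

# Copying moves every element: costly for large amounts of data.
copy_of_big = big[:]     # new storage; ~1M values duplicated

# A "pointer" (here, just another reference) avoids the move entirely.
alias = big              # no data copied; both names refer to one object

alias[0] = 99
print(big[0])            # 99 -- alias and original share storage
print(copy_of_big[0])    # 0  -- the copy is unaffected
```

Incrementing a single variable (`counter += 1`) is the kind of simple computation the paragraph mentions; the copy-versus-reference distinction above is why large data is usually passed around by pointer instead of by value.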
Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer, and do other unexpected things. Faults and errors are called "bugs", which are often discovered during alpha and beta testing. Software is also often a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.
Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug". In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing, and other methods, which are done manually or, most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to work together more easily.
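A minimal sketch of automated unit testing: each assertion checks one small, known case of a function. The `to_celsius` function is invented here purely for illustration:

```python
def to_celsius(fahrenheit):
    # Convert a Fahrenheit temperature to Celsius.
    return (fahrenheit - 32) * 5 / 9

# Unit tests: each assertion checks one known case automatically.
assert to_celsius(32) == 0       # freezing point of water
assert to_celsius(212) == 100    # boiling point of water
assert to_celsius(-40) == -40    # the two scales cross at -40
assert to_celsius(50) == 10.0    # an ordinary mid-range value
```

A regression suite is simply a set of such checks rerun after every change, so a bug once fixed cannot silently reappear.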
The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.
Proprietary software can be divided into two types:
Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software.
Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law has been applied to various aspects of the writing of software code.
Software patents are controversial in the software industry, with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example, the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.
Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality.
Software is usually developed in integrated development environments (IDEs) like Eclipse, IntelliJ, and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, like GTK+, JavaBeans, or Swing. Libraries can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library, like Form1.Close and Form1.Show, to close or open the application. Without these APIs, the programmer would need to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs, so many applications are written using their software libraries, which usually have numerous APIs in them.
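The pattern of calling into a library's API rather than re-implementing functionality can be sketched with a hypothetical `Form` class; the names `Form`, `show`, and `close` are illustrative stand-ins, not a real GUI library's API:

```python
# Hypothetical sketch of the kind of API a GUI library exposes to
# applications; the application calls these methods instead of
# managing window state itself.
class Form:
    def __init__(self, title):
        self.title = title
        self.visible = False

    def show(self):
        # In a real library this would create and display a window.
        self.visible = True

    def close(self):
        # In a real library this would destroy the window.
        self.visible = False

form1 = Form("Document1")
form1.show()
print(form1.visible)   # True
form1.close()
print(form1.visible)   # False
```

The application only needs the two calls; the library owns everything behind them, which is the division of labor APIs exist to provide.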
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.
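For example, a minimal quicksort and a hash-table lookup (via Python's built-in `dict`, one hash-table implementation) might look like:

```python
def quicksort(items):
    # Recursive quicksort: pick a pivot, partition, sort the halves.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]

# A hash table maps keys to values in roughly constant time.
ages = {"ada": 36, "alan": 41}
print(ages["ada"])  # 36
```

This list-building quicksort trades memory for clarity; in-place partitioning is the usual production variant.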
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.
A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist such as "coder" and "hacker" – although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems.
A video game console is an electronic device that outputs a video signal or image to display a video game that can be played with a game controller. These may be home consoles, which are generally placed in a permanent location connected to a television or other display devices and controlled with a separate game controller, or handheld consoles, which include their own display unit and controller functions built into the unit and which can be played anywhere. Hybrid consoles combine elements of both home and handheld consoles.
Video game consoles are a specialized form of a home computer geared towards video game playing, designed with affordability and accessibility to the general public in mind, but lacking in raw computing power and customization. Simplicity is achieved in part through the use of game cartridges or other simplified methods of distribution, easing the effort of launching a game. However, this leads to ubiquitous proprietary formats that create competition for market share. More recent consoles have shown further confluence with home computers, making it easy for developers to release games on multiple platforms. Further, modern consoles can serve as replacements for media players with capabilities to play films and music from optical media or streaming media services.
Video game consoles are usually sold on a 5–7 year cycle called a generation, with consoles made with similar technical capabilities or made around the same time period grouped into one generation. The industry has developed a razor and blades model: manufacturers often sell consoles at low prices, sometimes at a loss, while primarily making a profit from the licensing fees for each game sold. Planned obsolescence then draws consumers into buying the next console generation. While numerous manufacturers have come and gone in the history of the console market, there have always been two or three dominant leaders in the market, with the current market led by Sony, Microsoft, and Nintendo. Previous console developers include Sega, Atari, Coleco, Mattel, NEC, SNK, Fujitsu, and 3DO.
The first video game consoles were produced in the early 1970s. Ralph H. Baer devised the concept of playing simple, spot-based games on a television screen in 1966, which later became the basis of the Magnavox Odyssey in 1972. Inspired by the table tennis game on the Odyssey, Nolan Bushnell, Ted Dabney, and Allan Alcorn at Atari, Inc. developed the first successful arcade game, Pong, and looked to develop that into a home version, which was released in 1975. The first consoles were capable of playing only a very limited number of games built into the hardware. Programmable consoles using swappable ROM cartridges were introduced with the Fairchild Channel F in 1976, though popularized with the Atari 2600 released in 1977.
Handheld consoles emerged from technology improvements in handheld electronic games as these shifted from mechanical to electronic/digital logic, and away from light-emitting diode indicators to liquid-crystal displays that resembled video screens more closely. Early examples include the Microvision in 1979 and Game & Watch in 1980, and the concept was fully realized by the Game Boy in 1989.
Both home and handheld consoles have become more advanced following global changes in technology. These technological shifts include improved electronic and computer chip manufacturing to increase computational power at lower costs and size, the introduction of 3D graphics and hardware-based graphics processors for real-time rendering, digital communications such as the Internet, wireless networking and Bluetooth, and larger and denser media formats as well as digital distribution.
Following the same type of Moore's law progression, home consoles are grouped into generations, each lasting approximately five years. Consoles within each generation share similar specifications and features, such as processor word size. While no one grouping of consoles by generation is universally accepted, one breakdown of generations, showing representative consoles of each, is shown below.
Most consoles are considered programmable consoles and have the means for the player to switch between different games. Traditionally, this has been done by switching a physical game cartridge or game card or through using optical media. It is now common to download games through digital distribution and store them on internal or external digital storage devices.
Dedicated consoles were very popular in the first generation until they were gradually replaced by second-generation consoles that use ROM cartridges. The fourth generation gradually shifted to optical media.
Some consoles are considered dedicated consoles, in which games available for the console are "baked" onto the hardware, either by being programmed via the circuitry or set in the read-only flash memory of the console. Thus, the console's game library cannot be added to or changed directly by the user. The user can typically switch between games on dedicated consoles using hardware switches on the console, or through in-game menus. Dedicated consoles were common in the first generation of home consoles, such as the Magnavox Odyssey and the home console version of Pong, and more recently have been used for retro-consoles such as the NES Classic Edition and Sega Genesis Mini.
Home video game consoles are meant to be connected to a television or other type of monitor, with power supplied through an outlet. This requires the unit to be used in a fixed location, typically at home in one's living room. Separate game controllers, connected through wired or wireless connections, are used to provide input to the game. Early examples include the Atari 2600, the Nintendo Entertainment System, and the Sega Genesis; newer examples include the Wii U, the PlayStation 5, and the Xbox Series X. Specific types of home consoles include:
Handheld game consoles are devices that typically include a built-in screen and game controller in their case, and contain a rechargeable battery or battery compartment. This allows the unit to be carried around and played anywhere, in contrast to a home game console. Examples include the Game Boy, the PlayStation Portable, and the Nintendo 3DS.
Hybrid video game consoles are devices that can be used either as a handheld or as a home console. They have either a wired connection or docking station that connects the console unit to a television screen and fixed power source, and the potential to use a separate controller. However, they can also be used as a handheld. While prior handhelds like the Sega Nomad and PlayStation Portable, or home consoles such as the Wii U, have had these features, some consider the Nintendo Switch to be the first true hybrid console.
A microconsole is a home video game console that is typically powered by low-cost computing hardware, making the console lower-priced compared to other home consoles on the market. The majority of microconsoles, with a few exceptions such as the PlayStation TV and OnLive Game System, are Android-based digital media players that are bundled with gamepads and marketed as gaming devices. Such microconsoles can be connected to the television to play video games downloaded from an application store such as Google Play.
During the later part of video game history, there have been specialized consoles using computing components to offer multiple games to players. Most of these plug directly into one's television, and thus are often called plug-and-play consoles. They are also considered dedicated consoles since it is generally impossible for an average consumer to access the computing components, though tech-savvy consumers have often found ways to hack the console to install additional functionality, voiding the manufacturer's warranty. Plug-and-play consoles usually come with the console unit itself, one or more controllers, and the required components for power and video hookup. Many recent plug-and-play releases have been for distributing a number of retro games for a specific console platform. Examples of these include the Atari Flashback series, the NES Classic Edition, the Sega Genesis Mini, and handheld retro consoles such as Nintendo's color-screen Game & Watch series.
Early console hardware was designed as customized printed circuit boards, selecting existing integrated circuit chips that performed known functions, or programmable chips like erasable programmable read-only memory (EPROM) chips that could perform certain functions. Persistent computer memory was expensive, so dedicated consoles were generally limited to the use of processor registers for storage of the state of a game, thus limiting the complexity of such titles. Pong, in both its arcade and home formats, had a handful of logic and calculation chips that used the current input of the players' paddles and resistors storing the ball's position to update the game's state and send it to the display device. Even with the more advanced integrated circuits of the time, designers were limited to what could be done through the electrical process rather than through programming as normally associated with video game development.
Improvements in console hardware followed with improvements in microprocessor technology and semiconductor device fabrication. Manufacturing processes have been able to reduce the feature size on chips, allowing more transistors and other components to fit on a chip, while at the same time increasing circuit speeds and the potential frequency the chip can run at, as well as reducing thermal dissipation. Chips were able to be made on larger dies, further increasing the number of features and effective processing power. Random-access memory became more practical with the higher density of transistors per chip, but to address the correct blocks of memory, processors needed to be updated to use larger word sizes and allow for larger bandwidth in chip communications. All these improvements did increase the cost of manufacturing, but at a rate far less than the gains in overall processing power, which helped to make home computers and consoles inexpensive for the consumer, all related to Moore's law of technological improvements.
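The link between address width and directly addressable memory is simple arithmetic: an n-bit address can name 2**n distinct locations, which is why larger word sizes were needed as memory grew. A quick sketch:

```python
# Directly addressable memory grows exponentially with address width:
# an n-bit address can name 2**n distinct byte locations.
for bits in (16, 24, 32):
    addressable = 2 ** bits
    print(f"{bits}-bit addresses: {addressable:,} bytes "
          f"({addressable // 1024:,} KiB)")
```

A 16-bit address space tops out at 64 KiB, while 32 bits reaches 4 GiB, roughly the jump consoles made across the generations this section describes.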
For consoles of the 1980s and 1990s, these improvements were evident in the marketing during the "bit wars", where console manufacturers focused on their console's processor word size as a selling point. Consoles since the 2000s are more similar to personal computers, building in memory, storage features, and networking capabilities to avoid the limitations of the past. The confluence with personal computers eased software development for both computer and console games, allowing developers to target both platforms. However, consoles differ from computers in that most of the hardware components are preselected and customized between the console manufacturer and hardware component provider to assure a consistent performance target for developers. Whereas personal computer motherboards are designed to allow consumers to add their desired selection of hardware components, the fixed set of hardware for consoles enables console manufacturers to optimize the size and design of the motherboard and hardware, often integrating key hardware components into the motherboard circuitry itself. Often, multiple components such as the central processing unit and graphics processing unit can be combined into a single chip, otherwise known as a system on a chip (SoC), which is a further reduction in size and cost. In addition, consoles tend to focus on components that give the unit high game performance, such as the CPU and GPU, and as a tradeoff to keep their prices in expected ranges, use less memory and storage space compared to typical personal computers.
In comparison to the early years of the industry, when most consoles were made directly by the company selling the console, many consoles today are generally constructed through a value chain that includes component suppliers, such as AMD and Nvidia for CPU and GPU functions, and contract manufacturers, including electronics manufacturing services such as Foxconn and Flextronics, whose factories assemble those components into the final consoles. Completed consoles are then usually tested, distributed, and repaired by the company itself. Microsoft and Nintendo both use this approach for their consoles, while Sony maintains all production in-house with the exception of its component suppliers.
Some of the common elements that can be found within console hardware include:
All game consoles require player input through a game controller to provide a method to move the player character in a specific direction and a variety of buttons to perform other in-game actions such as jumping or interacting with the game world. Though controllers have become more feature-rich over the years, they still provide less control over a game compared to personal computers or mobile gaming. The type of controller available to a game can fundamentally change the style of how a console game will or can be played. However, this has also inspired changes in game design to create games that accommodate the comparatively limited controls available on consoles.
Controllers have come in a variety of styles over the history of consoles. Some common types include:
Numerous other controller types exist, including those that support motion controls, touchscreen support on handhelds and some consoles, and specialized controllers for specific types of games, such as racing wheels for racing games, light guns for shooting games, and musical instrument controllers for rhythm games. Some newer consoles also include optional support for mouse and keyboard devices. Some older consoles, such as the 1988 Sega Genesis (also known as the Mega Drive) and the 1993 3DO Interactive Multiplayer, supported optional mice, with special mice made for each, but the 3DO mouse, like that console, was a flop, and the mouse for the Sega had very limited game support. The Sega also supported the optional Menacer, a wireless infrared light gun, and such light guns were at one point popular for games. It also supported the BatterUP, a baseball bat-shaped controller.
A controller may be attached through a wired connection to the console (in some unique cases, like the Famicom, hardwired to it) or through a wireless connection. Controllers require power, either provided by the console via the wired connection, or from batteries or a rechargeable battery pack for wireless connections. On handheld units, controllers are normally built in, though some newer ones allow separate wireless controllers to be used as well.
While the first game consoles were dedicated game systems, with the games programmed into the console's hardware, the Fairchild Channel F introduced the ability to store games in a form separate from the console's internal circuitry, thus allowing the consumer to purchase new games to play on the system. Since the Channel F, nearly all game consoles have featured the ability to purchase and swap games in some form, though those forms have changed with improvements in technology.
While magnetic storage, such as tape drives and floppy disks, had been popular for software distribution with early personal computers in the 1980s and 1990s, this format did not see much use in console systems. There were some attempts, such as the Bally Astrocade and APF-M1000 using tape drives, as well as the Disk System for the Nintendo Famicom and the 64DD for the Nintendo 64, but these had limited applications, as magnetic media was more fragile and volatile than game cartridges.
In addition to built-in internal storage, newer consoles often give the consumer the ability to use external storage media to save game data, downloaded games, or other media files from the console. Early iterations of external storage were achieved through the use of flash-based memory cards, first used by the Neo Geo but popularized with the PlayStation. Nintendo continues to support this approach, extending the storage capabilities of the 3DS and Switch by standardizing on the current SD card format. As consoles began incorporating USB ports, support for USB external hard drives was also added, such as with the Xbox 360.
With Internet-enabled consoles, console manufacturers offer both free and paid-subscription services that provide value-added services atop the basic functions of the console. Free services generally offer user identity services and access to a digital storefront, while paid services allow players to play online games, interact with other users through social networking, use cloud saves for supported games, and gain access to free titles on a rotating basis. Examples of such services include the Xbox network, PlayStation Network, and Nintendo Switch Online.
Certain consoles saw various add-ons or accessories that were designed to attach to the existing console to extend its functionality. The best example of this was through the various CD-ROM add-ons for consoles of the fourth generation, such as the TurboGrafx CD, Atari Jaguar CD, and the Sega CD. Other examples of add-ons include the 32X for the Sega Genesis, intended to allow owners of the aging console to play newer games but hampered by several technical faults, and the Game Boy Player for the GameCube, allowing it to play Game Boy games.
Consumers can often purchase a range of accessories for consoles outside of the above categories. These can include:
Console or game development kits are specialized hardware units that typically include the same components as the console and additional chips and components to allow the unit to be connected to a computer or other monitoring device for debugging purposes. A console manufacturer will make the console's dev kit available to registered developers months ahead of the console's planned launch to give developers time to prepare their games for the new system. These initial kits will usually be offered under special confidentiality clauses to protect trade secrets of the console's design, and will be sold at a high cost to the developer as part of keeping this confidentiality. Newer consoles that share features in common with personal computers may no longer use specialized dev kits, though developers are still expected to register and purchase access to software development kits from the manufacturer. For example, any consumer Xbox One can be used for game development after paying a fee to Microsoft to register one's intent to do so.
Since the release of the Nintendo Famicom / Nintendo Entertainment System, most video game console manufacturers have employed a strict licensing scheme that limits which games can be developed for the system. Developers and their publishers must pay a fee, typically based on a royalty per unit sold, back to the manufacturer. The cost varies by manufacturer but was estimated to be about US$3−10 per unit in 2012. With additional fees, such as branding rights, this has generally worked out to an industry-wide 30% royalty rate paid to the console manufacturer for every game sold. This is in addition to the cost of acquiring the dev kit to develop for the system.
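Using the roughly 30% royalty rate cited above, the manufacturer's cut can be sketched with arithmetic; the retail price and unit count below are hypothetical figures for illustration, not from the source:

```python
# Hypothetical example of the ~30% royalty model described above.
game_price = 59.99          # assumed retail price per copy (illustrative)
royalty_rate = 0.30         # industry-wide rate cited in the text
units_sold = 100_000        # assumed sales figure (illustrative)

royalty_to_manufacturer = game_price * royalty_rate * units_sold
print(f"${royalty_to_manufacturer:,.2f}")  # $1,799,700.00
```

Numbers like these are why manufacturers can afford to sell console hardware at or below cost, per the razor-and-blades model described earlier.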
The licensing fee may be collected in a few different ways. In the case of Nintendo, the company has generally controlled the production of game cartridges, with its lockout chips, and optical media for its systems, and thus charges the developer or publisher for each copy it makes as an upfront fee. This also allows Nintendo to review the game's content prior to release and veto games it does not believe appropriate to include on its system. This has led to over 700 unlicensed games for the NES, and numerous others on other Nintendo cartridge-based systems, from developers that found ways to bypass the hardware lockout chips and sell without paying any royalties to Nintendo, such as Atari through its subsidiary Tengen. This licensing approach was similarly used by most other cartridge-based console manufacturers using lockout chip technology.
|
1,359
|
With optical media, where the console manufacturer may not have direct control on the production of the media, the developer or publisher typically must establish a licensing agreement to gain access to the console's proprietary storage format for the media as well as to use the console and manufacturer's logos and branding for the game's packaging, paid back through royalties on sales. In the transition to digital distribution, where now the console manufacturer runs digital storefronts for games, license fees apply to registering a game for distribution on the storefront – again gaining access to the console's branding and logo – with the manufacturer taking its cut of each sale as its royalty. In both cases, this still gives console manufacturers the ability to review and reject games they believe unsuitable for the system and to deny licensing rights.
|
1,360
|
With the rise of indie game development, the major console manufacturers have all developed entry level routes for these smaller developers to be able to publish onto consoles at far lower costs and reduced royalty rates. Programs like Microsoft's ID@Xbox give developers most of the needed tools for free after validating the small development size and needs of the team.
|
1,361
|
Similar licensing concepts apply for third-party accessory manufacturers.
|
1,362
|
Consoles, like most consumer electronic devices, have limited lifespans. There is great interest in preservation of older console hardware for archival and historical purposes, as games from older consoles, as well as arcade and personal computers, remain of interest. Computer programmers and hackers have developed emulators that can be run on personal computers or other consoles that simulate the hardware of older consoles and allow games from that console to be run. The development of software emulators of console hardware has been established as legal, but there are unanswered legal questions surrounding copyrights, including acquiring a console's firmware and copies of a game's ROM image, which laws such as the United States' Digital Millennium Copyright Act make illegal save for certain archival purposes. Even though emulation itself is legal, Nintendo is recognized as highly protective of any attempts to emulate its systems and has taken early legal action to shut down such projects.
|
1,363
|
To help support older games and console transitions, manufacturers started to support backward compatibility on consoles in the same family. Sony was the first to do this on a home console with the PlayStation 2 which was able to play original PlayStation content, and subsequently became a sought-after feature across many consoles that followed. Backward compatibility functionality has included direct support for previous console games on the newer consoles such as within the Xbox console family, the distribution of emulated games such as Nintendo's Virtual Console, or using cloud gaming services for these older games as with the PlayStation Now service.
|
1,364
|
Consoles may be shipped in a variety of configurations, but typically will include one base configuration that includes the console, one controller, and sometimes a pack-in game. Manufacturers may offer alternate stock keeping unit options that include additional controllers and accessories or different pack-in games. Special console editions may feature unique cases or faceplates with art dedicated to a specific video game or series and are bundled with that game as a special incentive for its fans. Pack-in games are typically first-party games, often featuring the console's primary mascot characters.
|
1,365
|
The more recent console generations have also seen multiple versions of the same base console system either offered at launch or presented as a mid-generation refresh. In some cases, these simply replace some parts of the hardware with cheaper or more efficient parts, or otherwise streamline the console's design for production going forward; the PlayStation 3 underwent several such hardware refreshes during its lifetime due to technological improvements such as significant reduction of the process node size for the CPU and GPU. In these cases, the hardware revision model will be marked on packaging so that consumers can verify which version they are acquiring.
|
1,366
|
In other cases, the hardware changes create multiple lines within the same console family. The base console unit in all revisions shares fundamental hardware, but options like internal storage space and RAM size may differ. Those systems with more storage and RAM are marked as higher-performance variants available at a higher cost, while the original unit remains as a budget option. For example, within the Xbox One family, Microsoft released the mid-generation Xbox One X as a higher-performance console, the Xbox One S as the lower-cost base console, and a special Xbox One S All-Digital Edition revision that removed the optical drive on the basis that users could download all games digitally, offered at an even lower cost than the Xbox One S. In these cases, developers can often optimize games to work better on the higher-performance console with patches to the retail version of the game. In the case of the Nintendo 3DS, the New Nintendo 3DS featured upgraded memory and processors, with some new games that could only run on the upgraded units and not on an older base unit. There have also been a number of "slimmed-down" console options with significantly reduced hardware components, sold at a much lower price but either leaving certain features off the console, such as the Wii Mini, which lacked any online components compared to the Wii, or requiring the consumer to purchase additional accessories and wiring if they did not already own them, such as the New-Style NES, which was not bundled with the RF hardware required to connect to a television.
|
1,367
|
Consoles, when originally launched in the 1970s and 1980s, were about US$200−300, and with the introduction of the ROM cartridge, each game averaged about US$30−40. Over time the launch price of base console units has generally risen to about US$400−500, with the average game costing US$60. Exceptionally, the period of transition from ROM cartridges to optical media in the early 1990s saw several consoles with high price points exceeding US$400 and going as high as US$700. As a result, sales of these first optical media consoles were generally poor.
|
1,368
|
When adjusted for inflation, the price of consoles has generally followed a downward trend, from US$800−1,000 from the early generations down to US$500−600 for current consoles. This is typical for any computer technology, with the improvements in computing performance and capabilities outpacing the additional costs to achieve those gains. Further, within the United States, the price of consoles has generally remained consistent, being within 0.8% to 1% of the median household income, based on the United States Census data for the console's launch year.
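The income comparison above amounts to a one-line calculation; in the sketch below, the US$500 price and US$60,000 median income are placeholder assumptions for illustration, not census figures:

```python
# Console launch price expressed as a share of median household income.
# Both inputs are illustrative placeholders, not actual census data.

def price_to_income_ratio(console_price: float, median_income: float) -> float:
    """Launch price as a percentage of median household income."""
    return 100.0 * console_price / median_income

# e.g. a $500 console against an assumed $60,000 median income
ratio = price_to_income_ratio(500.0, 60_000.0)  # about 0.83%
```

With these assumed inputs, the result falls inside the 0.8%–1% band the text describes.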
|
1,369
|
Since the Nintendo Entertainment System, console pricing has stabilized on the razor-and-blades model, where the consoles are sold at little to no profit for the manufacturer, but they gain revenue from each game sold due to console licensing fees and other value-added services around the console. Console manufacturers have even been known to take losses on the sale of consoles at the start of a console's launch with the expectation to recover with revenue sharing and later price recovery on the console as they switch to less expensive components and manufacturing processes without changing the retail price. Consoles have generally been designed to have a five-year product lifetime, though manufacturers have considered their entries in the more recent generations to have longer lifetimes of seven to potentially ten years.
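The razor-and-blades arithmetic can be sketched in a few lines; the per-console loss and per-game royalty below are assumed numbers chosen purely for illustration:

```python
import math

# Toy break-even model for razor-and-blades console pricing: hardware is
# sold at a loss that game royalties must recover. All figures below are
# illustrative assumptions.

def games_to_recoup(loss_per_console: float, royalty_per_game: float) -> int:
    """Game sales per console needed to cover the hardware loss."""
    return math.ceil(loss_per_console / royalty_per_game)

# e.g. a $100 loss per console and an $18 royalty per game sold
needed = games_to_recoup(100.0, 18.0)  # 6 games
```

This is why attach rate (games sold per console) is such a closely watched metric in the industry: under this model, hardware sales alone never recover the loss.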
|
1,370
|
The competition within the video game console market as subset of the video game industry is an area of interest to economics with its relatively modern history, its rapid growth to rival that of the film industry, and frequent changes compared to other sectors.
|
1,371
|
Effects of unregulated competition on the market were twice seen early in the industry. The industry had its first crash in 1977 following the success of the Magnavox Odyssey, Atari's home versions of Pong, and the Coleco Telstar, which led other third-party manufacturers, using inexpensive General Instrument processor chips, to make their own home consoles that flooded the market by 1977.: 81–89 The video game crash of 1983 was fueled by multiple factors including competition from lower-cost personal computers, but unregulated competition was also a factor, as numerous third-party game developers, attempting to follow on the success of Activision in developing third-party games for the Atari 2600 and Intellivision, flooded the market with poor quality games, and made it difficult for even quality games to sell. Nintendo implemented a lockout chip, the Checking Integrated Circuit, on releasing the Nintendo Entertainment System in Western territories as a means to control which games were published for the console. As part of their licensing agreements, Nintendo further prevented developers from releasing the same game on a different console for a period of two years. This served as one of the first means of securing console exclusivity for games that existed beyond the technical limitations of console development.
|
1,372
|
The Nintendo Entertainment System also brought the concept of a video game mascot as the representation of a console system, used to sell and promote the unit, which for the NES was Mario. The use of mascots in businesses had been a tradition in Japan, and this had already proven successful in arcade games like Pac-Man. Mario was used to serve as an identity for the NES as a humor-filled, playful console. Mario caught on quickly when the NES released in the West, and when the next generation of consoles arrived, other manufacturers pushed their own mascots to the forefront of their marketing, most notably Sega with Sonic the Hedgehog. The Nintendo and Sega rivalry that involved their mascots' flagship games served as part of the fourth console generation's "console wars". Since then, manufacturers have typically positioned their mascot and other first-party games as key titles in console bundles used to drive sales of consoles at launch or at key sales periods such as near Christmas.
|
1,373
|
Another type of competitive edge used by console manufacturers around the same time was the notion of "bits", or the size of the word used by the main CPU. The TurboGrafx-16 was the first console marketed on its bit size, advertising itself as a "16-bit" console, though this only referred to part of its architecture while its CPU was still an 8-bit unit. Despite this, manufacturers found consumers became fixated on the notion of bits as a console selling point, and over the fourth, fifth, and sixth generations, these "bit wars" played heavily into console advertising. The use of bits waned as CPU architectures no longer needed to increase their word size and instead had other means to improve performance, such as multicore CPUs.
|
1,374
|
Generally, increased console numbers give rise to more consumer options and better competition, but the exclusivity of titles made the choice of console an "all-or-nothing" decision for most consumers. Further, with the number of available consoles growing in the fifth and sixth generations, game developers came under pressure to choose which systems to focus on, and ultimately narrowed their target platforms to those that were the best-selling. This caused a contraction in the market, with major players like Sega leaving the hardware business after the Dreamcast but continuing in the software area. Effectively, each console generation was shown to have two or three dominant players.
|
1,375
|
Competition in the console market in the 2010s and 2020s is considered an oligopoly between three main manufacturers: Nintendo, Sony, and Microsoft. The three use a combination of first-party games exclusive to their console and negotiate exclusive agreements with third-party developers to have their games be exclusive for at least an initial period of time to drive consumers to their console. They also worked with CPU and GPU manufacturers to tune and customize computer hardware to make it more amenable and effective for video games, leading to the lower-cost hardware needed for video game consoles. Finally, console manufacturers also work with retailers to help with promotion of consoles, games, and accessories. While retailers see little margin on the console hardware itself relative to the manufacturer's suggested retail price, these deals with the manufacturers can secure better profits on sales of game and accessory bundles in exchange for premier product placement. These all form network effects, with each manufacturer seeking to maximize the size of their network of partners to increase their overall position in the competition.
|
1,376
|
Of the three, Microsoft and Sony, both with their own hardware manufacturing capabilities, remain at the leading edge, each attempting to gain a first-mover advantage over the other in the adoption of new console technology. Nintendo is more reliant on its suppliers and thus, instead of trying to compete feature for feature with Microsoft and Sony, has taken a "blue ocean" strategy since the Nintendo DS and Wii.
|
1,377
|
In computer science, a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate significant areas of computing systems, making the process of developing a program simpler and more understandable than when using a lower-level language. The amount of abstraction provided defines how "high-level" a programming language is.
|
1,378
|
Note that languages are not strictly interpreted languages or compiled languages. Rather, implementations of language behavior use interpreting or compiling. For example, ALGOL 60 and Fortran have both been interpreted. Similarly, Java shows the difficulty of trying to apply these labels to languages rather than to implementations; Java is compiled to bytecode, which is then executed by either interpreting it or compiling it. Moreover, compiling, transcompiling, and interpreting are not strictly limited to a description of the compiler artifact.
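CPython illustrates the same point: the reference implementation first compiles source to bytecode and then interprets that bytecode on a virtual machine, so it is neither purely "compiled" nor purely "interpreted". A minimal sketch using the standard `dis` module:

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the function body to bytecode at definition time;
# its virtual machine then interprets these stack-based instructions,
# so the implementation both compiles and interprets.
opnames = [instr.opname for instr in dis.Bytecode(add)]
print(opnames)  # includes a binary-add opcode and a return instruction
```

The exact opcode names vary between CPython versions (for example, the add became a generic BINARY_OP in 3.11), which is itself a reminder that bytecode is a property of the implementation, not the language.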
|
1,379
|
Alternatively, it is possible for a high-level language to be directly implemented by a computer – the computer directly executes the HLL code. This is known as a high-level language computer architecture – the computer architecture itself is designed to be targeted by a specific high-level language. The Burroughs large systems were target machines for ALGOL 60, for example.
|
1,380
|
x86 is a family of complex instruction set computer instruction set architectures initially developed by Intel based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors. Colloquially, their names were "186", "286", "386" and "486".
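The segmented addressing described above can be sketched in a few lines: a 16-bit segment value shifted left four bits plus a 16-bit offset yields a 20-bit physical address, letting the 8086 reach 1 MiB despite having only 16-bit registers. The helper below is an illustration of the real-mode scheme, not production code:

```python
# Real-mode 8086 address formation: physical = segment * 16 + offset,
# wrapped to the 20-bit address bus.

def physical_address(segment: int, offset: int) -> int:
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF

# Many segment:offset pairs alias the same physical address:
assert physical_address(0x0001, 0x0000) == physical_address(0x0000, 0x0010)
```

One consequence of the four-bit overlap is exactly this aliasing: each physical address below 1 MiB can be named by many different segment:offset pairs.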
|
1,381
|
The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware. Embedded systems and general-purpose computers used x86 chips before the PC-compatible market started, some of them before the IBM PC debut.
|
1,382
|
As of June 2022, most desktop and laptop computers sold are based on the x86 architecture family, while mobile categories such as smartphones or tablets are dominated by ARM. At the high end, x86 continues to dominate computation-intensive workstation and cloud computing segments. The fastest supercomputer in the TOP500 list for June 2022 was the first exascale system, Frontier, built using AMD Epyc CPUs based on the x86 ISA; it broke the 1 exaFLOPS barrier in May 2022.
|
1,383
|
In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 usually represented any 8086-compatible CPU. Today, however, x86 usually implies a binary compatibility also with the 32-bit instruction set of the 80386. This is because this instruction set has become something of a lowest common denominator for many modern operating systems, and probably also because the term became common after the introduction of the 80386 in 1985.
|
1,384
|
A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology, as the "iAPX" prefix of the ambitious but ill-fated Intel iAPX 432 processor was applied to the more successful 8086 family of chips as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089 and simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX, iSBC, and iSBX, all together under the heading Microsystem 80. However, this naming scheme was quite temporary, lasting for a few years during the early 1980s.
|
1,385
|
Although the 8086 was primarily developed for embedded systems and small multi-user or single-user computers, largely as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is also used in midrange computers, workstations, servers, and most new supercomputer clusters of the TOP500 list. A large amount of software, including a long list of x86 operating systems, runs on x86-based hardware.
|
1,386
|
Modern x86 is relatively uncommon in embedded systems, however, and small low-power applications and low-cost microprocessor markets, such as home appliances and toys, lack significant x86 presence. Simple 8- and 16-bit based architectures are common here, as are simpler RISC architectures like RISC-V, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo, and Intel Atom are examples of 32- and 64-bit designs used in some relatively low-power and low-cost segments.
|
1,387
|
There have been several attempts, including by Intel, to end the market dominance of the "inelegant" x86 architecture, descended directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432, the Intel i960, the Intel i860, and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures, circuitry, and semiconductor manufacturing has made it hard to replace x86 in many segments. AMD's 64-bit extension of x86, and the scalability of x86 chips in the form of modern multi-core CPUs, underscore x86 as an example of how continuous refinement of established industry standards can resist the competition from completely new architectures.
|
1,388
|
The table below lists processor models and model series implementing various architectures in the x86 family, in chronological order. Each line item is characterized by significantly improved or commercially successful processor microarchitecture designs.
|
1,389
|
At various times, companies such as IBM, VIA, NEC, AMD, TI, STM, Fujitsu, OKI, Siemens, Cyrix, Intersil, C&T, NexGen, UMC, and DM&P started to design or manufacture x86 processors intended for personal computers and embedded systems. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek.
|
1,390
|
Such x86 implementations were seldom simple copies but often employed different internal microarchitectures and different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386 and i486 compatible processors, often named similarly to Intel's original chips.
|
1,391
|
After the fully pipelined i486, in 1993 Intel introduced the Pentium brand name for their new set of superscalar x86 designs. With the x86 naming scheme now legally cleared, other x86 vendors had to choose different names for their x86-compatible products, and initially some chose to continue with variations of the numbering scheme: IBM partnered with Cyrix to produce the 5x86 and then the very efficient 6x86 and 6x86MX lines of Cyrix designs, which were the first x86 microprocessors implementing register renaming to enable speculative execution.
|
1,392
|
AMD meanwhile designed and manufactured the advanced but delayed 5k86, which, internally, was closely based on AMD's earlier 29K RISC design; similar to NexGen's Nx586, it used a strategy in which dedicated pipeline stages decode x86 instructions into uniform and easily handled micro-operations, a method that has remained the basis for most x86 designs to this day.
|
1,393
|
Some early versions of these microprocessors had heat dissipation problems. The 6x86 was also affected by a few minor compatibility problems, the Nx586 lacked a floating-point unit and pin-compatibility, while the K5 had somewhat disappointing performance when it was introduced.
|
1,394
|
Customer ignorance of alternatives to the Pentium series further contributed to these designs being comparatively unsuccessful, despite the fact that the K5 had very good Pentium compatibility and the 6x86 was significantly faster than the Pentium on integer code. AMD later managed to grow into a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron.
|
1,395
|
There were also other contenders, such as Centaur Technology, Rise Technology, and Transmeta. VIA Technologies' energy-efficient C3 and C7 processors, which were designed by the Centaur company, were sold for many years following their release in 2005. Centaur's 2008 design, the VIA Nano, was their first processor with superscalar and speculative execution. It was introduced at about the same time as Intel introduced the Intel Atom, its first "in-order" processor after the P5 Pentium.
|
1,396
|
Many additions and extensions have been added to the original x86 instruction set over the years, almost consistently with full backward compatibility. The architecture family has been implemented in processors from Intel, Cyrix, AMD, VIA Technologies and many other companies; there are also open implementations, such as the Zet SoC platform. Nevertheless, of those, only Intel, AMD, VIA Technologies, and DM&P Electronics hold x86 architectural licenses, and from these, only the first two actively produce modern 64-bit designs, leading to what has been called a "duopoly" of Intel and AMD in x86 processors.
|
1,397
|
However, in 2014 the Shanghai-based Chinese company Zhaoxin, a joint venture between a Chinese company and VIA Technologies, began designing VIA-based x86 processors for desktops and laptops. The release of its newest "7" family of x86 processors, which are not quite as fast as AMD or Intel chips but are still state of the art, had been planned for 2021; as of March 2022 the release had not taken place, however.
|
1,398
|
The instruction set architecture has twice been extended to a larger word size. In 1985, Intel released the 32-bit 80386 which gradually replaced the earlier 16-bit chips in computers during the following years; this extended programming model was originally referred to as the i386 architecture but Intel later dubbed it IA-32 when introducing its IA-64 architecture.
|
1,399
|
In 1999–2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents and later as AMD64. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64. Microsoft and Sun Microsystems/Oracle also use the term "x64", while many Linux distributions and the BSDs use the "amd64" term. Microsoft Windows, for example, designates its 32-bit versions as "x86" and 64-bit versions as "x64", while installation files of 64-bit Windows versions are required to be placed into a directory called "AMD64".
|
1,400
|
In 2023, Intel proposed a major change to the architecture referred to as x86-S, which aims to remove support for legacy execution modes and instructions. A processor implementing this proposal would start execution directly in long mode and would only support 64-bit operating systems. 32-bit code would only be supported for user applications running in ring 3, and would use the same simplified segmentation as long mode.
|