[SOURCE: https://en.wikipedia.org/wiki/Reciprocity_(social_and_political_philosophy)]
Reciprocity (social and political philosophy)

The social norm of reciprocity is the expectation that people will respond to each other in similar ways—responding to gifts and kindnesses from others with similar benevolence of their own, and responding to harmful, hurtful acts from others with either indifference or some form of retaliation. Such norms can be crude and mechanical, such as a literal reading of the eye-for-an-eye rule lex talionis, or they can be complex and sophisticated, such as a subtle understanding of how anonymous donations to an international organization can be a form of reciprocity for the receipt of very personal benefits, such as the love of a parent. The norm of reciprocity varies widely in its details from situation to situation, and from society to society. Anthropologists and sociologists have often claimed, however, that having some version of the norm appears to be a social inevitability. Reciprocity figures prominently in social exchange theory, evolutionary psychology, social psychology, cultural anthropology and rational choice theory.

Patterns of reciprocity

One-to-one reciprocity. Some reciprocal relationships are direct one-to-one arrangements between individuals, or between institutions, or between governments. Some of these are one-time arrangements, and others are embedded in long-term relationships. Families often have expectations that children will reciprocate for the care they receive as infants by caring for their elderly parents; businesses may have long-term contractual obligations with each other; governments make treaties with each other. There are also one-to-one reciprocal relationships that are indirect. For example, there are sometimes long chains of exchanges, in which A gives a benefit to B, who passes on a similar benefit to C, and so on, in which each party in the chain expects that what goes around will eventually come back around. The classic anthropological example is the Kula exchange in the Trobriand Islands of Papua New Guinea.

One-to-many and many-to-one reciprocity often lies somewhere between direct reciprocal arrangements and generalized reciprocity. Informal clubs in which the hosting arrangements circulate among members are examples of the one-to-many variety. Bridal showers are examples of the many-to-one variety. So are barn raising practices in some frontier communities. All of these are similar to direct reciprocity, since the beneficiaries are identified as such in each case, and contributors know exactly what they can expect in return. But because membership in the group changes, and needs for new meetings or marriages or barns are not always predictable, these cases differ significantly from precisely defined one-to-one cases.

Generalized reciprocity is even less precise. Here donors operate within a large network of social transactions largely unknown to each other, and without expectations about getting specific benefits in return—other than, perhaps, the sort of social insurance provided by the continuance of the network itself. Recipients may not know the donors, and may not themselves be able to make a return in kind to that network, but perhaps feel obligated to make a return to a similar network. Blood banks and food banks are examples. But in fact any stable social structure in which there is a division of labor will involve a system of reciprocal exchanges of this generalized sort, as a way of sustaining social norms.
All of these patterns of reciprocity, along with related ideas such as gratitude, have been central to social and political philosophy from Plato onward. Reciprocity is mentioned in Aristotle's Nicomachean Ethics at Book 5, Chapter 5, Line 1: "Some think that reciprocity is without qualification just, as the Pythagoreans said", meaning that "Should a man suffer what he did, right justice would be done". Aristotle is stating the problems with this approach. He later concludes that "…for this is characteristic of grace – we should serve in return one who has shown grace to us, and should another time take the initiative in showing it", and continues further with a formula of proportionate return. These philosophical discussions concern the ways in which patterns and norms of reciprocity might have a role in theories of justice, stable and productive social systems, healthy personal relationships, and ideals for human social life generally.

The concept of reciprocity

Philosophical work on reciprocity often pays considerable attention, directly or indirectly, to the proper interpretation of one or more of the following conceptual issues.

Reciprocity as distinct from related ideas. In Plato's Crito, Socrates considers whether citizens might have a duty of gratitude to obey the laws of the state, in much the way they have duties of gratitude to their parents. Many other philosophers have considered similar questions. (See the references below to Sidgwick, English, and Jecker for modern examples.) This is certainly a legitimate question. Charging a child or a citizen with ingratitude can imply a failure to meet a requirement. But confining the discussion to gratitude is limiting. There are similar limitations in discussions of the do-unto-others golden rule, or ethical principles that are modeled on the mutuality and mutual benevolence that come out of the face-to-face relations envisaged by Emmanuel Levinas or the I-Thou relationships described by Martin Buber. Like gratitude, these other ideas have things in common with the norm of reciprocity, but are quite distinct from it. Gratitude, in its ordinary sense, is as much about having warm and benevolent feelings toward one's benefactors as it is about having obligations to them. Reciprocity, in its ordinary dictionary sense, is broader than that, and broader than all discussions that begin with a sense of mutuality and mutual benevolence. (See the reference below to Becker, Reciprocity, and the bibliographic essays therein.) Reciprocity pointedly covers arm's-length dealings between egoistic or mutually disinterested people. Moreover, norms of gratitude do not speak very directly about what feelings and obligations are appropriate toward wrongdoers, or the malicious. Reciprocity, by contrast, speaks directly to both sides of the equation – requiring responses in kind: positive for positive, negative for negative. In this, it also differs from the golden rule, which is compatible with forgiveness and "turning the other cheek" but has notorious difficulties as a basis for corrective justice, punishment, and dealing with people (e.g., masochists) who have unusual motivational structures. Finally, the idea of enforcing, or carrying out, a duty of gratitude, as well as calibrating the extent of one's gratitude, seems inconsistent with the warm and benevolent feelings of "being grateful." There is a similar inconsistency in the idea of enforcing a duty to love.
Reciprocity, by contrast, because it does not necessarily involve having special feelings of love or benevolence, fits more comfortably into discussions of duties and obligations. Further, its requirement of an in-kind response invites us to calibrate both the quality and the quantity of the response. The norm of reciprocity thus requires that we make fitting and proportional responses to both the benefits and harms we receive – whether they come from people who have been benevolent or malicious. Working out the conceptual details of this idea presents interesting questions of its own. The following matters are all considered at length in many of the sources listed below under References, and those authors typically defend particular proposals about how best to define the conceptual details of reciprocity. What follows here is simply an outline of the topics that are under philosophical scrutiny.

Qualitative similarity. What counts as making a qualitatively appropriate or "fitting" response in various settings—positive for positive, negative for negative? If one person invites another to dinner, must the other offer a dinner in return? How soon? Must it be directly to the original benefactor, or will providing a comparable favor to someone else be appropriate? If the dinner one receives is unintentionally awful, must one reciprocate with something similarly awful? Sometimes an immediate tit-for-tat response seems inappropriate, and at other times it is the only thing that will do. Are there general principles for assessing the qualitative appropriateness of reciprocal responses? Reflective people typically practice a highly nuanced version of the norm of reciprocity for social life, in which the qualitative similarity or fittingness of the response appears to be determined by a number of factors.

The nature of the transaction. One is the general nature of the transaction or relationship between the parties – the rules and expectations involved in a particular interaction itself. Tit for tat, defined in a literal way as an exchange of identical kinds of goods (client list for client list, referral for referral), may be the only sort of reciprocal response that is appropriate in a clearly defined business situation. Similarly, dinner-for-dinner may be the expectation among members of a round-robin dinner club. But when the nature of the transaction is more loosely defined, or is embedded in a complex personal relationship, an appropriate reciprocal response often requires spontaneity, imagination, and even a lack of premeditation about where, what, and how soon.

Fitting the response to the recipient. Another aspect of qualitative fit is what counts subjectively, for the recipient, as a response in kind. When we respond to people who have benefited us, it seems perverse to give them things they do not regard as benefits. The general principle here is that, other things equal, a return of good for good received will require giving something that will actually be appreciated as good by the recipient – at least eventually. Similarly for the negative side. When we respond to bad things, reciprocity presumably requires a return that the recipient regards as a bad thing.

Unusual circumstances. A third aspect of qualitative fit is the presence or absence of circumstances that undermine the usual expectations about reciprocity.
If a pair of friends often borrow each other's household tools, and one of them (suddenly deranged with anger) asks to borrow an antique sword from the other's collection, what is a fitting response? The example, in a slightly different form, goes back to Plato. The point is that in this unusual circumstance, reciprocity (as well as other considerations) may require that the recipient not get what he wants at the moment. Rather, it may be that the recipient should be given what he needs, in some objective sense, whether he ever comes to appreciate that it is good for him.

General rationale. A final determinant of qualitative fit is the general rationale for having the norm of reciprocity in the first place. For example, if the ultimate point of practicing reciprocity is to produce stable, productive, fair, and reliable social interactions, then there may be some tensions between things that accomplish this general goal and things that satisfy only the other three determinants. Responding to others' harmful conduct raises this issue. As Plato observed (Republic, Book I), it is not rational to harm our enemies in the sense of making them worse, as enemies or as people, than they already are. We may reply to Plato by insisting that reciprocity merely requires us to make them worse-off, not worse, period. But if it turns out that the version of the reciprocity norm we are using actually has the consequence of doing both, or at any rate not improving the situation, then we will have undermined the point of having it.

Quantitative similarity. Another definitional issue concerns proportionality. What counts as too little, or too much, in return for what we receive from others? In some cases, such as borrowing a sum of money from a friend who has roughly the same resources, a prompt and exact return of the same amount seems right. Less will be too little, and a return with interest will often be too much, between friends. But in other cases, especially in exchanges between people who are very unequal in resources, a literal reading of tit-for-tat may be a perverse rule – one that undermines the social and personal benefits of the norm of reciprocity itself. How, for example, may badly disadvantaged people reciprocate for the public or private assistance they receive? Requiring a prompt and exact return of the benefit received may defeat the general purpose of the norm of reciprocity by driving disadvantaged people further into debt. Yet to waive the debt altogether, or to require only some discounted amount, seems to defeat the purpose also.

Anglo-American legal theory and practice have examples of two options for dealing with this problem. One is to require a return that is equal to the benefit received, but to limit the use of that requirement in special cases. Bankruptcy rules are in part designed to prevent downward, irrecoverable spirals of debt while still exacting a considerable penalty. Similarly, there are rules for rescinding unconscionable contracts, preventing unjust enrichment, and dealing with cases in which contractual obligations have become impossible to perform. These rules typically have considerable transaction costs. Another kind of option is to define a reciprocal return with explicit reference to ability to pay. Progressive tax rates are an example of this. Considered in terms of reciprocity, this option seems based on an equal sacrifice interpretation of proportionality, rather than an equal benefit one.
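One hedged way to make this equal-sacrifice reading concrete (the notation is illustrative and is not drawn from the sources cited here): suppose the giver has wealth w_g and gives g, the receiver has wealth w_r and returns r, and both value wealth by the same concave utility function u. Then:

```latex
% Illustrative formalization of "equal sacrifice" (assumed notation):
u(w_g) - u(w_g - g) \;=\; u(w_r) - u(w_r - r)
% With, e.g., logarithmic (diminishing-marginal-utility) wealth valuation
% u(w) = \log w, this reduces to
\log\frac{w_g}{w_g - g} \;=\; \log\frac{w_r}{w_r - r}
\qquad\Longrightarrow\qquad
\frac{r}{w_r} \;=\; \frac{g}{w_g}.
```

On these assumptions, each party surrenders the same fraction of its resources, which is the intuition behind ability-to-pay schemes such as progressive tax rates.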
Under an equal sacrifice rule, making a quantitatively similar return will mean giving something back whose marginal value to oneself, given one's resources, equals the marginal value of the sacrifice made by the original giver, given her resources.

Reciprocity and justice

Standard usage of the term justice shows its close general connection to the concept of reciprocity. Justice includes the idea of fairness, and that in turn includes treating similar cases similarly, giving people what they deserve, and apportioning all other benefits and burdens in an equitable way. Those things, further, involve acting in a principled, impartial way that forbids playing favorites and may require sacrifices. All of those things are certainly in the neighborhood of the elements of reciprocity (e.g., fittingness, proportionality), but it is challenging to explain the precise connections. Discussions of merit, desert, blame, and punishment inevitably involve questions about the fittingness and proportionality of our responses to others, and retributive theories of punishment put the norm of reciprocity at their center. The idea is to make the punishment fit the crime. This differs from utilitarian theories of punishment, which may use fittingness and proportionality as constraints, but whose ultimate commitment is to make punishment serve social goals such as general deterrence, public safety, and the rehabilitation of wrongdoers.

In just war theory, notions of fittingness and proportionality are central, at least as constraints both on the justification of a given war and on the methods used to prosecute it. When war represents a disproportionate response to a threat or an injury, it raises questions of justice related to reciprocity. When war fighting employs weapons that do not discriminate between combatants and noncombatants, it raises questions of justice related to reciprocity. A profound sense of injustice related to a lack of reciprocity – for example, between those privileged by socioeconomic status, political power, or wealth, and those who are less privileged and oppressed – sometimes leads to war in the form of revolutionary or counterrevolutionary violence. It has been argued that the use of autonomous or remote-controlled weaponized drones violates reciprocity. Political solutions which end the violence without dealing with the underlying injustice run the risk of continued social instability.

A very deep and persistent line of philosophical discussion explores the way in which reciprocity can resolve conflicts between justice and self-interest, and can justify the imposition (or limitation) of social, political, and legal obligations that require individuals to sacrifice their own interests. This aspect of the philosophical discussion of reciprocity attempts to bring together two ways of approaching a very basic question: What is the fundamental justification for the existence of social and political institutions – institutions that impose and enforce duties and obligations upon their members?

Individual well-being. One obvious answer is that people need to stay out of each other's way enough so that each can pursue his or her individual interests as far as possible, without interference from others. This immediately justifies rules that are mutually advantageous, but it raises questions about requiring obedience from people whenever it turns out that they will be disadvantaged by following the rules, or can get away with disobeying them.
So the problem becomes one of showing whether, and when, it might actually be mutually advantageous to follow the rules of justice even when it is inconvenient or costly to do so. Social contract theorists often invoke the value of reciprocal relationships to deal with this. Many human beings need help from one another from time to time in order to pursue their individual interests effectively. So if we can arrange a system of reciprocity in which all the benefits we are required to contribute are typically returned to us in full (or more), that may justify playing by the rules—even in cases where it looks as though we can get away with not doing so.

Social well-being. Another obvious answer to the question of why people organize themselves into groups, however, is in order to achieve levels of cooperation needed for improving society generally – for example by improving public health, and society-wide levels of education, wealth, or individual welfare. This also gives a reason for rules of justice, but again raises problems about requiring individuals to sacrifice their own welfare for the good of others—especially when some individuals might not share the particular goals for social improvement at issue. Here too, the value of reciprocal relationships can be invoked, this time to limit the legitimacy of the sacrifices a society might require. For one thing, it seems perverse to require sacrifices in pursuit of some social goal if it turns out those sacrifices are unnecessary, or in vain because the goal cannot be achieved. To some philosophers, a theory of justice based on reciprocity (or fairness, or fair play) is an attractive middle ground between a thoroughgoing concern with individual well-being and a thoroughgoing concern with social well-being. This has been part of the attraction of the most influential line of thought on distributive justice in recent Anglo-American philosophy – the one carried on in the context of John Rawls' work.

Future generations. It may also be that there is something to be gained, philosophically, from considering what obligations of generalized reciprocity present generations of human beings may have towards future ones. Rawls briefly considers the problem of defining a "just savings principle" for future generations, and treats it as a consequence of the interests people typically have in the welfare of their descendants, and the agreements fully reciprocal members of society would come to among themselves about such matters. Others (e.g., Lawrence C. Becker) have explored the intuitive idea that acting on behalf of future generations may be required as a generalized form of reciprocity for benefits received from previous generations.

Mutuality

What is the relation between reciprocity and love, friendship or family relationships? If such relationships are ideally ones in which the parties are connected by mutual affection and benevolence, should not justice and reciprocity stay out of their way? Is impartiality consistent with love? Does not acting on principle take the affection out of friendship or family relationships? Does following the norm of reciprocity eliminate unconditional love or loyalty? Some contemporary philosophers have criticized major figures in the history of Western philosophy, including John Rawls' early work, for making familial relationships more or less opaque in theories of justice. (See the reference below to Okin.) The argument is that families can be grossly unjust, and have often been so.
Since the family is "the school of justice", if it is unjust the moral education of children is distorted, and the injustice tends to spread to the society at large, and to be perpetuated in following generations. If that is right, then justice and reciprocity must define the boundaries within which we pursue even the most intimate relationships.

A somewhat different thread on these matters begins with Aristotle's discussion of friendship, in Nicomachean Ethics 1155a–1172a. He proposes that the highest or best form of friendship involves a relationship between equals – one in which a genuinely reciprocal relationship is possible. This thread appears throughout the history of Western ethics in discussions of personal and social relationships of many sorts: between children and parents, spouses, humans and other animals, and humans and god(s). The question is the extent to which the kind of reciprocity possible in various relationships determines the kind of mutual affection and benevolence possible in those relationships.

This said, Nick Founder in "Finding True Friends" (2015) observes that reciprocation in personal relationships rarely follows a mathematical formula: the level of reciprocation, i.e. the give and take, will vary depending on the personalities involved and on situational factors such as which party has more control, persuasive power or influence. It is often the case that one party will typically be the lead reciprocator, with the other being the responsive reciprocator. The form of reciprocation can also be influenced by the level of emotional need. Sometimes one party will need more support than the other, and this can switch at different times depending on the life situation of each party. Because reciprocation is influenced by personal circumstances, and since people do not follow a set pattern like robots, reciprocation from one friend to another will vary in intensity, and an absolutely consistent pattern cannot be expected. If, for example, a person has a large inner circle of friendships with reciprocation as the key element of friendship, then the level of reciprocation within the inner circle will influence the depth of a friendship therein. Reciprocation can be responsive, or it can take the initiative. It is also a fundamental principle in parenting, a successful workplace, religion and karma. So, for example, in the friendship context, reciprocation means to give and take mutually but not necessarily equally; overall reciprocal balance is more important than strict equality at every moment. Friendship based on reciprocity means caring for each other, being responsive and supportive, and being in tune with each other. But without some form of overall reciprocal balance, the relationship may become transformed into a nonreciprocal form of friendship, or the friendship may fail altogether.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-362]
Minecraft

Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities.

Originally created by Markus "Notch" Persson using the Java programming language, the game was handed to Jens "Jeb" Bergensten, who took control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences.

Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time.

Gameplay

Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world (a minimal sketch of this kind of block storage appears below). Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air.

Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
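To make the voxel-grid idea concrete, here is a minimal, hedged sketch in Java (Minecraft's original implementation language). The class name, chunk size, and byte-per-block layout are illustrative assumptions only; Mojang's actual chunk format is far more elaborate (block states, vertical sub-chunks, palettes):

```java
// Illustrative voxel storage: chunks of block IDs addressed by integer
// coordinates. Chunks are created lazily, which is what lets a world grow
// "effectively infinitely" on demand. Not Mojang's real data layout.
import java.util.HashMap;
import java.util.Map;

public class VoxelWorld {
    static final int SIZE = 16;                  // blocks per chunk edge (assumed)
    static final byte AIR = 0, DIRT = 1, STONE = 2;

    // Chunk storage keyed by packed chunk coordinates (vertical range kept
    // to one chunk, 0..15, for brevity).
    private final Map<Long, byte[]> chunks = new HashMap<>();

    private static long key(int cx, int cz) {
        return ((long) cx << 32) ^ (cz & 0xffffffffL);
    }

    private byte[] chunk(int x, int z) {
        return chunks.computeIfAbsent(
                key(Math.floorDiv(x, SIZE), Math.floorDiv(z, SIZE)),
                k -> new byte[SIZE * SIZE * SIZE]);
    }

    // Index within a chunk; floorMod keeps negative world coordinates valid.
    private static int idx(int x, int y, int z) {
        return (y * SIZE + Math.floorMod(z, SIZE)) * SIZE + Math.floorMod(x, SIZE);
    }

    public byte getBlock(int x, int y, int z)         { return chunk(x, z)[idx(x, y, z)]; }
    public void setBlock(int x, int y, int z, byte b) { chunk(x, z)[idx(x, y, z)] = b; }

    public static void main(String[] args) {
        VoxelWorld w = new VoxelWorld();
        w.setBlock(-5, 3, 120, STONE);               // "place" a block
        System.out.println(w.getBlock(-5, 3, 120));  // 2
        w.setBlock(-5, 3, 120, AIR);                 // "mine" it again
        System.out.println(w.getBlock(-5, 3, 120));  // 0
    }
}
```

Breaking and placing a block is then just writing a different ID into the grid cell, which is why the core loop of mining and building maps so directly onto this data structure.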
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems (a boolean sketch of such gates appears at the end of the Gameplay section below). New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins.

Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively.

The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player); a sketch of this seed-driven determinism follows below. Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.

Minecraft features three independent dimensions, accessible through portals, that provide alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
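The seed-driven generation described above can be illustrated with a small sketch: a world seed plus chunk coordinates deterministically seed a random generator, so any two players who use the same seed get the same terrain, and chunks never need to be stored until modified. This is illustrative only (the mixing constants and crude heightmap are assumptions), not Mojang's actual generator, which layers multiple noise functions and biome logic:

```java
// Hedged sketch of seed-based determinism: the same (worldSeed, cx, cz)
// triple always yields the same chunk, so worlds are reproducible from a
// single seed and can be generated lazily, on demand.
import java.util.Arrays;
import java.util.Random;

public class SeededChunks {
    static final int CHUNK = 16;

    // Derive a per-chunk RNG from the world seed and chunk coordinates.
    // The multiplier constants are arbitrary illustrative mixing values.
    static Random chunkRandom(long worldSeed, int cx, int cz) {
        long mixed = worldSeed ^ (cx * 341873128712L + cz * 132897987541L);
        return new Random(mixed);
    }

    // Generate a crude 16x16 heightmap for one chunk.
    static int[][] heights(long worldSeed, int cx, int cz) {
        Random rng = chunkRandom(worldSeed, cx, cz);
        int base = 60 + rng.nextInt(8);            // flat-ish base level
        int[][] h = new int[CHUNK][CHUNK];
        for (int x = 0; x < CHUNK; x++)
            for (int z = 0; z < CHUNK; z++)
                h[x][z] = base + rng.nextInt(3);   // small local variation
        return h;
    }

    public static void main(String[] args) {
        // Regenerating the same chunk from the same seed is bit-identical.
        int[][] a = heights(42L, 7, -3);
        int[][] b = heights(42L, 7, -3);
        System.out.println(Arrays.deepEquals(a, b)); // true
    }
}
```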
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past; it is the game's only narrative text, and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely.

In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.

The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended.

In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
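Returning to the redstone material mentioned earlier in this section: the reason redstone can implement logic gates is that, in the game, a redstone torch attached to a powered block turns off (an inverter, i.e. NOT), and merging wires act as OR; by De Morgan's law those two primitives suffice for AND and hence any boolean circuit. The sketch below models only that boolean composition (real redstone also has 0-15 signal strength and tick delays), as a hedged illustration rather than a simulation of the game:

```java
// Hedged boolean model of redstone gate composition.
public class RedstoneLogic {
    static boolean not(boolean in)           { return !in; }    // torch on a powered block switches off
    static boolean or(boolean a, boolean b)  { return a || b; } // two wires merging
    // The standard in-game AND build: invert both inputs, merge, invert again.
    static boolean and(boolean a, boolean b) { return not(or(not(a), not(b))); }

    public static void main(String[] args) {
        // Truth table for AND built purely from NOT and OR.
        boolean[] vals = {false, true};
        for (boolean a : vals)
            for (boolean b : vals)
                System.out.printf("%s AND %s = %s%n", a, b, and(a, b));
    }
}
```

Once NOT, OR, and AND are available, players can chain them into adders, memory cells, and the "many complex systems" the gameplay description refers to.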
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players.

In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018.

The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
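As a concrete illustration of the resource-pack mechanism just described: a Java Edition resource pack is, at its simplest, a folder containing a pack.mcmeta descriptor and an assets tree whose files override the game's defaults. The sketch below scaffolds that minimal skeleton; the pack_format number is version-dependent (15 here is a placeholder), and everything else follows the documented assets/<namespace> layout:

```java
// Hedged sketch: scaffolding the minimal skeleton of a Java Edition
// resource pack on disk. The pack.mcmeta layout ("pack" -> pack_format,
// description) is the standard descriptor; the value of pack_format
// changes between game versions, so treat 15 as a placeholder.
import java.nio.file.Files;
import java.nio.file.Path;

public class PackScaffold {
    public static void main(String[] args) throws Exception {
        Path root = Path.of("my_resource_pack");
        // Override textures live under assets/<namespace>/textures/...;
        // "minecraft" is the namespace that shadows the game's own assets.
        Files.createDirectories(root.resolve("assets/minecraft/textures/block"));
        String meta = """
                {
                  "pack": {
                    "pack_format": 15,
                    "description": "Example pack (illustrative only)"
                  }
                }
                """;
        Files.writeString(root.resolve("pack.mcmeta"), meta);
        System.out.println("Wrote " + root.resolve("pack.mcmeta"));
    }
}
```

Dropping such a folder into the game's resourcepacks directory (and selecting it in the options menu) is how texture and sound overrides reach the game without any code modification, which is what distinguishes resource packs from mods.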
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation.

The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017.

In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened.

In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements.

The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions received major updates, usually annually—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates.

On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date.

Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009. On 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010.

The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project.

The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this apparent acquisition later became controversial and its legitimacy was questioned, due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements, and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition.

The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players.

Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions, released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios.

Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems.

On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well.

An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020.

On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023.

The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft like world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate.

Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive.

Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design

Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you."

Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine."

The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI."

On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate-color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 "Update Aquatic". His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have contributed, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the new primary composer. Microsoft has retained ownership of all music besides Rosenfeld's independently released albums, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then ran longer than the previous two albums combined, which together total over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether a third volume of his music was still intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."

Reception
Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease with which it enables emergent gameplay. Critics have praised Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that it strikes a good balance between exploring and building. The game's multiplayer feature has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer called Minecraft "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste".

A review of the alpha version by Scott Munro of the Daily Record called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, the gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying these made the game more user-friendly. The Xbox One Edition was one of the best-received ports, praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, praised for worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds.

Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content.

Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and was never commercially advertised, spreading instead through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the sixth-best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies.

The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when it broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since debuting on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had sold 21 million copies. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%; it outsold both the PS3 and PS4 debut releases and became the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped boost Microsoft's total first-party revenue by $63 million in the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million.

In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as its choice for Best Downloadable Game of 2010, Gamasutra named it the eighth-best game of the year as well as the eighth-best indie game of the year, and Rock Paper Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival, and won the Grand Prize and the community-voted Audience Award. At the 2011 Game Developers Choice Awards, Minecraft won in the categories Best Debut Game, Best Downloadable Game, and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games to be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit, which opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category, and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed sixth on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed sixth on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. The game garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards. At the Golden Joystick Awards 2025, it won the Still Playing Award in the PC and Console category.

Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, who cited emails he had received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang.

In 2020, Mojang announced an eventual change to the Java Edition requiring a login from a Microsoft account rather than a Mojang account, the latter of which would be phased out. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying it enabled improved security, including two-factor authentication, the blocking of cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and could no longer be migrated.

In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four.

The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first vote this was changed, and losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs (the crab, the penguin, and the armadillo), with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired.

Cultural impact
In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development.

Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game went on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most-searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion.

Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset references building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released; it made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age.

The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, Cody Sumter, a member of the Human Dynamics group at the MIT Media Lab, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap.

In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding, "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions.

In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors who have been censored and arrested, such as Jamal Khashoggi, from countries including Egypt, Mexico, Russia, Saudi Arabia and Vietnam. The neoclassical virtual building was created over about 250 hours by an international team of 24 people.

Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while others play the game in a more vanilla or more consistency-oriented way.

Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 in collaboration with Mojang to make the game affordable and accessible for schools. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 it reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition, with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft."

Following the initial surge in Minecraft's popularity in 2010, other video games were criticized for various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or merely superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was its "low resolution pixel art", which too closely resembled the art in Minecraft and met "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game, considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). The fears of fans proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team.

In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is AI-generated in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The DMCA was later withdrawn.

Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture sessions with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Parameter_(computer_programming)] | [TOKENS: 3517] |
Contents Parameter (computer programming)
In computer programming, a parameter, a.k.a. formal argument, is a variable that represents an argument, a.k.a. actual argument or actual parameter, to a function call. A function's signature defines its parameters. A call involves evaluating each argument expression of the call and associating the result with the corresponding parameter. For example, consider the Python function add shown in the first sketch below. Variables x and y are parameters, each of type int. For the call add(2, 3), the expressions 2 and 3 are arguments. For the call add(a+1, b+2), the arguments are a+1 and b+2.

Parameter passing is defined by a programming language. Evaluation strategy defines the semantics for how parameters can be declared and how arguments are passed to a function. Generally, with call by value, a parameter acts like a new, local variable initialized to the value of the argument. If the argument is a variable, the function cannot modify the argument state because the parameter is a copy. With call by reference, which requires the argument to be a variable, the parameter is an alias of the argument.

Example
The following C source code (the second sketch below) defines a function named salesTax with one parameter named price; both the function and the parameter are typed double. For the call salesTax(10.00), the argument 10.00 is passed to the function as the double value 10 and assigned to the parameter variable price, and the function returns 0.5.

Parameters and arguments
The terms parameter and argument may have different meanings in different programming languages. Sometimes they are used interchangeably, and the context is used to distinguish the meaning. The term parameter (sometimes called formal parameter) is often used to refer to the variable as found in the function declaration, while argument (sometimes called actual parameter) refers to the actual input supplied at a function call statement. For example, if one defines a function as def f(x): ..., then x is the parameter, and if it is called by a = ...; f(a), then a is the argument. A parameter is an (unbound) variable, while the argument can be a literal, a variable, or a more complex expression involving literals and variables. In the case of call by value, what is passed to the function is the value of the argument – for example, f(2) and a = 2; f(a) are equivalent calls – while in call by reference, with a variable as argument, what is passed is a reference to that variable, even though the syntax for the function call could stay the same. The specification for pass-by-reference or pass-by-value would be made in the function declaration and/or definition.

Parameters appear in procedure definitions; arguments appear in procedure calls. In the function definition f(x) = x*x, the variable x is a parameter; in the function call f(2), the value 2 is the argument of the function. Loosely, a parameter is a type, and an argument is an instance. A parameter is an intrinsic property of the procedure, included in its definition. For example, in many languages, a procedure to add two supplied integers together and calculate the sum would need two parameters, one for each integer. In general, a procedure may be defined with any number of parameters, or no parameters at all. If a procedure has parameters, the part of its definition that specifies the parameters is called its parameter list. By contrast, the arguments are the expressions supplied to the procedure when it is called, usually one expression matching one of the parameters.
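The Python function referenced above was stripped during extraction; a minimal reconstruction consistent with the surrounding description (the names add, x, and y come from the prose):

    def add(x: int, y: int) -> int:
        # x and y are parameters; whatever a call supplies are arguments
        return x + y

    add(2, 3)          # the expressions 2 and 3 are arguments
    a, b = 1, 2
    add(a + 1, b + 2)  # arguments may be arbitrary expressions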
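The C example from the Example section is likewise missing. Since salesTax(10.00) is said to return 0.5, the sketch below assumes a 5% rate (the original function body is unknown):

    #include <stdio.h>

    /* price is the function's single parameter, typed double */
    double salesTax(double price) {
        return 0.05 * price;  /* assumed 5% rate: salesTax(10.00) yields 0.5 */
    }

    int main(void) {
        printf("%.2f\n", salesTax(10.00));  /* argument 10.00 is copied into price */
        return 0;
    }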
Unlike the parameters, which form an unchanging part of the procedure's definition, the arguments may vary from call to call. Each time a procedure is called, the part of the procedure call that specifies the arguments is called the argument list. Although parameters are also commonly referred to as arguments, arguments are sometimes thought of as the actual values or references assigned to the parameter variables when the function is called at run-time. When discussing code that is calling into a function, any values or references passed into the function are the arguments, and the place in the code where these values or references are given is the parameter list. When discussing the code inside the function definition, the variables in the function's parameter list are the parameters, while the values of the parameters at runtime are the arguments.

Consider the C function sum, reconstructed in the first sketch below, which has two parameters, addend1 and addend2. It adds the values passed into the parameters and returns the result to the function's caller. The sketch also shows an example of calling sum: the variables value1 and value2 are initialized and then passed to sum as the arguments. At runtime, the values assigned to these variables are passed to sum. In sum, the parameters addend1 and addend2 are evaluated, yielding the arguments 40 and 2, respectively. The values of the arguments are added, and the result is returned to the caller, where it is assigned to the variable sum_value.

Because of the difference between parameters and arguments, it is possible to supply inappropriate arguments to a procedure. The call may supply too many or too few arguments, one or more of the arguments may be of a wrong type, or arguments may be supplied in the wrong order. Any of these situations causes a mismatch between the parameter and argument lists, and the procedure will often return an unintended answer or generate a runtime error.

Within the Eiffel software development method and language, the terms argument and parameter have distinct uses established by convention. The term argument is used exclusively in reference to a routine's inputs, and the term parameter is used exclusively in type parameterization for generic classes. Consider the routine definition in the Eiffel sketch below: the routine sum takes two arguments addend1 and addend2, which are called the routine's formal arguments. A call to sum specifies actual arguments, shown there with value1 and value2. Parameters are also thought of as either formal or actual. Formal generic parameters are used in the definition of generic classes; in the sketch, the class HASH_TABLE is declared as a generic class with two formal generic parameters, G representing data of interest and K representing the hash key for the data. When a class becomes a client to HASH_TABLE, the formal generic parameters are substituted with actual generic parameters in a generic derivation. In the attribute declaration shown there, my_dictionary is to be used as a character-string-based dictionary, so both the data and key formal generic parameters are substituted with actual generic parameters of type STRING.

Datatypes
In strongly typed programming languages, each parameter's type must be specified in the procedure declaration. Languages using type inference attempt to discover the types automatically from the function's body and usage. Dynamically typed programming languages defer type resolution until run-time.
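A reconstruction of the sum function and its caller described above; the names and the values 40 and 2 come from the prose:

    #include <stdio.h>

    /* addend1 and addend2 are parameters; they receive copies of the arguments */
    int sum(int addend1, int addend2) {
        return addend1 + addend2;
    }

    int main(void) {
        int value1 = 40;
        int value2 = 2;
        /* value1 and value2 are the arguments of this call */
        int sum_value = sum(value1, value2);
        printf("%d\n", sum_value);  /* prints 42 */
        return 0;
    }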
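The Eiffel fragments this passage refers to were also stripped; the following is a hedged sketch of what they plausibly looked like, with names taken from the prose and syntax from standard Eiffel (declaration fragments, not a complete class):

    sum (addend1, addend2: INTEGER): INTEGER
            -- Sum of the formal arguments `addend1' and `addend2'.
        do
            Result := addend1 + addend2
        end

    -- A call supplies actual arguments:
    --     total := sum (value1, value2)

    -- A generic class with formal generic parameters G and K might be declared:
    --     class HASH_TABLE [G, K -> HASHABLE] ...

    -- A generic derivation substitutes actual generic parameters:
    my_dictionary: HASH_TABLE [STRING, STRING]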
Weakly typed languages perform little to no type resolution, relying instead on the programmer for correctness. Some languages use a special keyword (e.g. void) to indicate that the function has no parameters; in formal type theory, such functions take an empty parameter list (whose type is not void, but rather unit).

Argument passing
The mechanism for assigning arguments to parameters, called argument passing, depends upon the evaluation strategy used for that parameter (typically call by value), which may be specified using keywords.

Some programming languages, such as Ada, C++, Clojure, Common Lisp, Fortran 90, Python, Ruby, Tcl, and Windows PowerShell, allow a default argument to be explicitly or implicitly given in a function's declaration. This allows the caller to omit that argument when calling the function. If the default argument is explicitly given, then that value is used if it is not provided by the caller. If the default argument is implicit (sometimes by using a keyword such as Optional), then the language provides a well-known value (such as null, Empty, zero, an empty string, etc.) if a value is not provided by the caller. A PowerShell example appears in the first sketch below.

Default arguments can be seen as a special case of the variable-length argument list. Some languages allow functions to be defined to accept a variable number of arguments. For such languages, the functions must iterate through the list of arguments; the second PowerShell sketch below illustrates this.

Some programming languages, such as Ada and Windows PowerShell, allow functions to have named parameters. This allows the calling code to be more self-documenting. It also provides more flexibility to the caller, often allowing the order of the arguments to be changed or arguments to be omitted as needed; the third PowerShell sketch below shows this.

In lambda calculus, each function has exactly one parameter. What is thought of as a function with multiple parameters is usually represented in lambda calculus as a function which takes the first argument and returns a function which takes the rest of the arguments; this is a transformation known as currying. Some programming languages, like ML and Haskell, follow this scheme. In these languages, every function has exactly one parameter, and what may look like the definition of a function of multiple parameters is actually syntactic sugar for the definition of a function that returns a function, etc. Function application is left-associative in these languages, as in lambda calculus, so what looks like an application of a function to multiple arguments is correctly evaluated as the function applied to the first argument, then the resulting function applied to the second argument, and so on; the Haskell sketch below makes this concrete.

Output parameters
An output parameter, also known as an out parameter or return parameter, is a parameter used for output, rather than the more usual use for input. Using call by reference parameters, or call by value parameters where the value is a reference, as output parameters is an idiom in some languages, notably C and C++, while other languages have built-in support for output parameters. Languages with built-in support for output parameters include Ada (see Ada subprograms), Fortran (since Fortran 90; see Fortran "intent"), various procedural extensions to SQL, such as PL/SQL (see PL/SQL functions) and Transact-SQL, C# and the .NET Framework, Swift, and the scripting language TScript (see TScript function declarations).
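The first missing PowerShell example concerned default arguments; a minimal sketch (the function and parameter names are illustrative, not from the original):

    function Get-Greeting {
        param ([string]$Name = "world")  # explicit default argument
        "Hello, $Name"
    }

    Get-Greeting          # argument omitted: uses the default, "Hello, world"
    Get-Greeting "Alice"  # a caller-supplied argument overrides the default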
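The second missing PowerShell example concerned variable-length argument lists; a sketch using the automatic $args collection (function name assumed):

    function Get-Sum {
        # $args collects every argument not bound to a declared parameter
        $total = 0
        foreach ($n in $args) { $total += $n }
        $total
    }

    Get-Sum 1 2 3 4   # returns 10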
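The third missing PowerShell example concerned named parameters; a sketch (names illustrative):

    function New-Label {
        param ([string]$Text, [int]$Width, [int]$Height)
        "$Text ($Width x $Height)"
    }

    # Named arguments may be supplied in any order, and some may be omitted
    New-Label -Height 20 -Width 80 -Text "banner"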
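The currying scheme described above can be made concrete with a short Haskell sketch (not part of the original article; the names are illustrative):

    -- add looks like a two-parameter function...
    add :: Int -> Int -> Int
    add x y = x + y

    -- ...but is equivalent to a function returning a function:
    addCurried :: Int -> (Int -> Int)
    addCurried x = \y -> x + y

    -- partial application: add applied to its first argument only
    inc :: Int -> Int
    inc = add 1

    main :: IO ()
    main = print (add 2 3, inc 41)  -- prints (5,42)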
An example of an output parameter in C is reconstructed in the sketch below: the function returns nothing, but the value of x + y is assigned to the variable whose address is passed in as out.

More precisely, one may distinguish three types of parameters, or parameter modes: input parameters, output parameters, and input/output parameters; these are often denoted in, out, and in out or inout. An input argument (the argument to an input parameter) must be a value, such as an initialized variable or literal, and must not be redefined or assigned to. An output argument must be an assignable variable; it need not be initialized, any existing value it holds is not accessible, and it must be assigned a value. An input/output argument must be an initialized, assignable variable, and can optionally be assigned a value. The exact requirements and enforcement vary between languages – for example, in Ada 83 output parameters can only be assigned to, not read, even after assignment (this was removed in Ada 95 to remove the need for an auxiliary accumulator variable). These modes are analogous to the notion of a value in an expression being an r-value (has a value), an l-value (can be assigned), or an r-value/l-value (has a value and can be assigned), respectively, though these terms have specialized meanings in C. In some cases only input and input/output are distinguished, with output being considered a specific use of input/output, and in other cases only input and output (but not input/output) are supported. The default mode varies between languages: in Fortran 90 input/output is the default, while in C# and SQL extensions input is the default, and in TScript each parameter is explicitly specified as input or output.

Syntactically, parameter mode is generally indicated with a keyword in the function declaration, such as void f(out int x) in C#. Conventionally, output parameters are often put at the end of the parameter list to clearly distinguish them, though this is not always followed. TScript uses a different approach: in the function declaration, input parameters are listed, then output parameters, separated by a colon (:), and there is no return type for the function itself – as in a function that computes the size of a text fragment, taking the text as input and yielding its width and height as outputs.

Parameter modes are a form of denotational semantics, stating the programmer's intent and allowing compilers to catch errors and apply optimizations – they do not necessarily imply operational semantics (how the parameter passing actually occurs). Notably, while input parameters can be implemented by call by value, and output and input/output parameters by call by reference – and this is a straightforward way to implement these modes in languages without built-in support – this is not always how they are implemented. This distinction is discussed in detail in the Ada '83 Rationale, which emphasizes that the parameter mode is abstracted from which parameter-passing mechanism (by reference or by copy) is actually implemented. For instance, while in C# input parameters (default, no keyword) are passed by value, and output and input/output parameters (out and ref) are passed by reference, in PL/SQL input parameters (IN) are passed by reference, and output and input/output parameters (OUT and IN OUT) are by default passed by value with the result copied back, but can be passed by reference using the NOCOPY compiler hint.

A syntactically similar construction to output parameters is to assign the return value to a variable with the same name as the function.
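A reconstruction of the C output-parameter example described at the start of this passage; the names x, y, and out come from the prose, while the function name is assumed:

    #include <stdio.h>

    /* out is an output parameter: the caller passes the address of a variable */
    void add_out(int x, int y, int *out) {
        *out = x + y;  /* the function returns nothing; the result leaves via out */
    }

    int main(void) {
        int result;
        add_out(40, 2, &result);
        printf("%d\n", result);  /* prints 42 */
        return 0;
    }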
Assigning the return value to the function's own name in this way is found in Pascal, Fortran 66, and Fortran 77; the Pascal sketch below reconstructs the missing example. This is semantically different in that, when called, the function is simply evaluated – it is not passed a variable from the calling scope to store the output in.

The primary use of output parameters is to return multiple values from a function, while the use of input/output parameters is to modify state using parameter passing (rather than by shared environment, as in global variables). An important use of returning multiple values is to solve the semipredicate problem of returning both a value and an error status – see Semipredicate problem: Multivalued return. For example, to return two variables from a function in C, one may write a function like the C sketch below, where x is an input parameter and width and height are output parameters.

A common use case in C and related languages is exception handling, where a function places the return value in an output variable and returns a Boolean corresponding to whether the function succeeded or not. An archetypal example is the TryParse method in .NET, especially C#, which parses a string into an integer, returning true on success and false on failure; its signature and usage appear in the TryParse sketch below. Similar considerations apply to returning a value of one of several possible types, where the return value can specify the type and the value is then stored in one of several output variables.

Output parameters are often discouraged in modern programming, essentially for being awkward, confusing, and too low-level – commonplace return values are considerably easier to understand and work with. Notably, output parameters involve functions with side effects (modifying the output parameter) and are semantically similar to references, which are more confusing than pure functions and values, and the distinction between output parameters and input/output parameters can be subtle. Further, since in common programming styles most parameters are simply input parameters, output parameters and input/output parameters are unusual and hence susceptible to misunderstanding.

Output and input/output parameters prevent function composition, since the output is stored in variables rather than in the value of an expression. Thus one must initially declare a variable, and then each step of a chain of functions must be a separate statement. The C++ sketch below contrasts composition via return values with the output- and input/output-parameter style. In the special case of a function with a single output or input/output parameter and no return value, function composition is possible if the output or input/output parameter (or in C/C++, its address) is also returned by the function; the same sketch shows this variant.

There are various alternatives to the use cases of output parameters. For returning multiple values from a function, an alternative is to return a tuple. Syntactically this is clearer if automatic sequence unpacking and parallel assignment can be used, as in Go or Python; see the tuple-return sketch below. For returning a value of one of several types, a tagged union can be used instead; the most common cases are nullable types (option types), where the return value can be null to indicate failure. For exception handling, one can return a nullable type or raise an exception.
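The missing Pascal example, reconstructed as a minimal sketch (the function name is illustrative); the return value is set by assigning to the function's own name:

    program Demo;

    function Square(x: integer): integer;
    begin
      Square := x * x   { assigning to the function name sets the return value }
    end;

    begin
      writeln(Square(7))  { prints 49 }
    end.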
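A sketch of the C function with one input and two output parameters described above; x, width, and height come from the prose, while the function name and computations are placeholders:

    #include <stdio.h>

    /* x is an input parameter; width and height are output parameters */
    void F(int x, int *width, int *height) {
        *width  = x * 2;   /* illustrative; the original computation is unknown */
        *height = x * 3;
    }

    int main(void) {
        int width, height;
        F(10, &width, &height);   /* width and height receive the outputs */
        printf("%d %d\n", width, height);
        return 0;
    }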
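The TryParse signature and usage referenced above; this matches the .NET Int32.TryParse API (the usage fragment assumes a surrounding method and using System):

    public static bool TryParse(string s, out int result);

    // usage: result is an output parameter, the bool reports success
    if (int.TryParse(input, out int value))
        Console.WriteLine($"parsed {value}");
    else
        Console.WriteLine("not a number");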
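A sketch of the C++ composition contrast described above, using int functions so the fragment is self-contained (the names f_out, g_inout, and f_ret are illustrative):

    #include <iostream>

    int f(int x) { return x + 1; }
    int g(int x) { return x * 2; }

    void f_out(int x, int &out) { out = x + 1; }  // output parameter
    void g_inout(int &v) { v *= 2; }              // input/output parameter

    // Returning (a reference to) the output parameter restores composability:
    int &f_ret(int x, int &out) { out = x + 1; return out; }

    int main() {
        int y1 = g(f(10));        // return values compose in one expression

        int y2;                   // output-parameter style needs a declared
        f_out(10, y2);            // variable and one statement per step
        g_inout(y2);

        int y3;
        g_inout(f_ret(10, y3));   // composition via the returned reference

        std::cout << y1 << ' ' << y2 << ' ' << y3 << '\n';  // 22 22 22
        return 0;
    }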
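The tuple-return alternative mentioned above, sketched in Python (names illustrative):

    def get_size(x):
        # returns multiple values as a tuple instead of via output parameters
        return x * 2, x * 3

    width, height = get_size(10)  # automatic unpacking / parallel assignment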
In Python, for instance, one might return None to signal failure or, more idiomatically, raise an exception; both variants appear in the Python sketch below. The micro-optimization of not requiring a local variable and not copying the return value when using output variables can also be applied to conventional functions and return values by sufficiently sophisticated compilers.

The usual alternative to output parameters in C and related languages is to return a single data structure containing all return values. For example, given a structure encapsulating width and height, one can write a function returning it, as in the C sketch below. In object-oriented languages, instead of using input/output parameters, one can often use call by sharing: passing a reference to an object and then mutating the object, though not changing which object the variable refers to. |
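The two Python variants contrasted above, sketched with an assumed integer-parsing helper:

    def parse_int_or_none(s):
        """Return the parsed integer, or None to signal failure."""
        try:
            return int(s)
        except ValueError:
            return None

    def parse_int(s):
        """More idiomatically, let the exception propagate to the caller."""
        return int(s)  # raises ValueError on failure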
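The C alternative described above, returning a single structure that bundles width and height (the struct and function names are assumed):

    #include <stdio.h>

    struct Size { int width; int height; };

    /* all outputs travel in one returned structure, not output parameters */
    struct Size get_size(int x) {
        struct Size s = { x * 2, x * 3 };  /* illustrative computations */
        return s;
    }

    int main(void) {
        struct Size s = get_size(10);
        printf("%d %d\n", s.width, s.height);
        return 0;
    }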
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ministry_of_Defence_(United_Kingdom)] | [TOKENS: 2672] |
Contents Ministry of Defence (United Kingdom)
The Ministry of Defence (MOD or MoD) is a ministerial department of the Government of the United Kingdom. It is responsible for implementing the defence policy set by the government and serves as the headquarters of the British Armed Forces. Officially, its principal objectives are to defend the United Kingdom of Great Britain and Northern Ireland and its interests and to strengthen international peace and stability. The MOD also manages the day-to-day running of the armed forces, contingency planning and defence procurement. The expenditure, administration and policy of the MOD are scrutinised by the Defence Select Committee, except for Defence Intelligence, which instead falls under the Intelligence and Security Committee of Parliament. The Ministry of Defence has also been involved in commercial activities, an example being its 2025 agreement to supply Norway with warships, the UK's largest warship export deal by value.

History
During the 1920s and 1930s, British civil servants and politicians, looking back at the performance of the state during the First World War, concluded that there was a need for greater co-ordination between the three services that made up the armed forces of the United Kingdom: the Royal Navy, the British Army and the Royal Air Force. The formation of a united ministry of defence was rejected by the coalition government of David Lloyd George in 1921, but the Chiefs of Staff Committee was formed in 1923 for the purposes of inter-service co-ordination. As rearmament became a concern during the 1930s, Stanley Baldwin created the position of Minister for Co-ordination of Defence. Ernle Chatfield, 1st Baron Chatfield, held the post until the fall of the Chamberlain government in 1940; his success was limited by his lack of control over the existing service departments and his lack of political influence.

On forming his government in 1940, Winston Churchill created the office of Minister of Defence to exercise ministerial control over the Chiefs of Staff Committee and to co-ordinate defence matters. The post was held by the prime minister of the day until Clement Attlee's government introduced the Ministry of Defence Act 1946. After 1946, the three posts of Secretary of State for War, First Lord of the Admiralty, and Secretary of State for Air were formally subordinated to the new Minister of Defence, who had a seat in the Cabinet. The three service ministers – Admiralty, War, Air – remained in direct operational control of their respective services, but ceased to attend Cabinet.

From 1946 to 1964, five Departments of State did the work of the modern Ministry of Defence: the Admiralty, the War Office, the Air Ministry, the Ministry of Aviation, and an earlier form of the Ministry of Defence. The Ministry of Supply existed from 1939 to 1959. Those departments merged in 1964, and the defence functions of the Ministry of Aviation Supply were merged into the Ministry of Defence in 1971. Thereafter, the MoD Procurement Executive was established as a separate organisation to supervise all military procurement. The unification of all defence activities under a single ministry was motivated by a desire to curb inter-service rivalries and followed the precedent set by the American National Security Act of 1947.

The most notable fraud conviction has been that of Gordon Foxley, Director of Ammunition Procurement at the Ministry of Defence from 1981 to 1984.
Police claimed he received at least £3.5m in total in corrupt payments, including substantial bribes from overseas arms contractors aiming to influence the allocation of contracts.

A government report covered by The Guardian newspaper in 2002 indicated that between 1940 and 1979, the Ministry of Defence "turned large parts of the country into a giant laboratory to conduct a series of secret germ warfare tests on the public" and that many of these tests "involved releasing potentially dangerous chemicals and micro-organisms over vast swathes of the population without the public being told." The Ministry of Defence maintains that these trials were to simulate germ warfare and that the tests were harmless. However, families in the areas of many of the tests have reported children with birth defects and physical and mental disabilities, and many have asked for a public inquiry. The report estimated these tests affected millions of people, including one period between 1961 and 1968 in which "more than a million people along the south coast of England, from Torquay to the New Forest, were exposed to bacteria including E.coli and Bacillus globigii, which mimics anthrax." Two scientists commissioned by the Ministry of Defence stated that these trials posed no risk to the public. This was confirmed by Sue Ellison, a representative of the Defence Science and Technology Laboratory at Porton Down, who said that the results from these trials "will save lives, should the country or our forces face an attack by chemical and biological weapons."

In February 2019, former soldier Inoke Momonakaya won a £458,000 payout after a legal battle over the racial harassment and bullying he received while serving in the army. In August 2019, a Commons Defence Select Committee report revealed that several female and BAME military staff had raised concerns regarding discrimination, bullying and harassment. In September 2019, two former British Army soldiers, Nkululeko Zulu and Hani Gue, won a racial discrimination claim against the Ministry of Defence (MoD). In November 2019, mixed-race soldier Mark De Kretser sued the MoD for £100,000, claiming he had been subjected to "grindingly repetitive" racist taunts from colleagues.

In October 2009, the MOD was heavily criticised for withdrawing the £20m budget for the Territorial Army's (TA) non-operational training, ending all such training for six months until April 2010. The government eventually backed down and restored the funding. The TA provides a small percentage of the UK's operational troops; its members train on weekly evenings and monthly weekends, as well as on two-week exercises, generally held annually and occasionally twice a year for troops doing other courses. The cuts would have meant a significant loss of personnel and would have had adverse effects on recruitment.

In 2013, it was found that the Ministry of Defence had overspent on its equipment budget by £6.5bn on orders that could take up to 39 years to fulfil. The Ministry of Defence has been criticised in the past for poor management and financial control, and specific examples of overspending have been documented. Following the 2025 Strategic Defence Review, which suggested increased spending and large changes to the armed forces, particularly a shift toward autonomous weapons, the Defence Investment Plan, expected in autumn 2025, was delayed amid warnings of a £28 billion funding gap over the next four years.
In May 2024, the ministry's payroll system was reportedly targeted multiple times in a cyberattack in which personnel's names and bank details were compromised. While initial reports attributed the cyberattack to China, the Defence Secretary, Grant Shapps, said it would take some time to conclude who was to blame.

Ministerial team
The Ministers in the Ministry of Defence are as follows, with cabinet ministers in bold.

Senior military officials
The Chief of the Defence Staff (CDS) is the professional head of the British Armed Forces and the most senior uniformed military adviser to the Secretary of State for Defence and the Prime Minister. The CDS is supported by the Vice Chief of the Defence Staff (VCDS), who deputises and is responsible for the day-to-day running of the armed services aspect of the MOD through the Central Staff, working closely alongside the Permanent Secretary. They are joined by the professional heads of the three British armed services (the Royal Navy, the British Army and the Royal Air Force) and the Commander of Strategic Command. All of these officers hold OF-9 rank in the NATO rank system. Together the Chiefs of Staff form the Chiefs of Staff Committee, with responsibility for providing advice on operational military matters and the preparation and conduct of military operations. The current Chiefs of Staff are as follows.

The Chief of the Defence Staff is supported by several Deputy Chiefs of the Defence Staff and senior officers at OF-8 rank. Additionally, there are a number of Assistant Chiefs of Defence Staff, including the Defence Services Secretary in the Royal Household of the Sovereign of the United Kingdom, who is also the Assistant Chief of Defence Staff (Personnel).

Senior management
The Ministers and Chiefs of the Defence Staff are supported by several civilian, scientific and professional military advisers. The Permanent Under-Secretary of State for Defence (generally known as the Permanent Secretary) is the senior civil servant at the MOD. Their role is to ensure that it operates effectively as a government department, with responsibility for the strategy, performance, reform, organisation and finances of the MOD. The role works closely with the Chief of the Defence Staff in leading the organisation and supporting Ministers in the conduct of business in the department across the full range of responsibilities.

Defence policy
The Strategic Defence and Security Review 2015 included £178 billion of investment in new equipment and capabilities. The review set a defence policy with four primary missions for the Armed Forces, and stated that the Armed Forces would also contribute to the government's response to crises by being prepared for a range of tasks.

Governance and departmental organisation
Defence is governed and managed by several committees, and the MOD comprises four top-level budgets. A range of organisational groups also come under the control of the MOD, including executive agencies, executive and advisory non-departmental public bodies, ad-hoc advisory groups, other bodies, public corporations and an enabling organisation. In addition, the MOD is responsible for the administration of the Sovereign Base Areas of Akrotiri and Dhekelia in Cyprus. Competitive procurement processes are used whenever possible, and all new direct tender and contract opportunities valued over £10,000 are advertised on a system called the Defence Sourcing Portal.
A separate internal policy generally operates for low-value purchasing below this threshold. DEFCONs (defence contract conditions) are numbered standard conditions included in contracts issued by the MOD; they are not to be confused with DEFCON as used by the United States Armed Forces, which refers to a level of military "defence readiness condition". A full set of the DEFCONs can be accessed via the MoD's Defence Gateway (registration required). The government noted in 2013 that the MoD's third-party expenditure was characterised by "complex, high-value contracts". Defence purchasing contributes to government ambitions to make supply chains more accessible to small and medium-sized enterprises, but the government commented that it had yet to secure good insight into the supply chain role of SMEs. The National Defence Industries Council is a UK body through which the Ministry is able to consult strategically with its principal defence suppliers and, through the Council's sub-groups, with bodies in specific industrial sectors. Membership consists of a range of companies, and appointments are made "at the discretion of the Secretary of State".

Property portfolio
The Ministry of Defence is one of the United Kingdom's largest landowners, owning 227,300 hectares of land and foreshore (either freehold or leasehold) as of April 2014, valued at "about £20 billion". The MOD also has "rights of access" to a further 222,000 hectares. In total, this is about 1.8% of the UK land mass. The total annual cost of supporting the defence estate is "in excess of £3.3 billion". The defence estate is divided into training areas and ranges (84.0%), research and development (5.4%), airfields (3.4%), barracks and camps (2.5%), storage and supply depots (1.6%), and other uses (3.0%). These are largely managed by the Defence Infrastructure Organisation.

The headquarters of the MOD, in Whitehall, is known as MOD Main Building. This structure is neoclassical in style and was originally built between 1938 and 1959 to designs by Vincent Harris to house the Air Ministry and the Board of Trade. A major refurbishment of the building was completed under a Private Finance Initiative contract by Skanska in 2004. The northern entrance in Horse Guards Avenue is flanked by two monumental statues, Earth and Water, by Charles Wheeler. Opposite stands the Gurkha Monument, sculpted by Philip Jackson and unveiled in 1997 by Queen Elizabeth II. Within the building is the Victoria Cross and George Cross Memorial, and nearby are memorials to the Fleet Air Arm and the RAF (to its east, facing the riverside). Henry VIII's wine cellar at the Palace of Whitehall, built in 1514–1516 for Cardinal Wolsey, is in the basement of Main Building and is used for entertainment. The entire vaulted brick structure of the cellar was encased in steel and concrete and relocated nine feet to the west and nearly 19 feet (5.8 m) deeper in 1949, when construction resumed at the site after World War II. This was carried out without any significant damage to the structure. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Roast_(comedy)] | [TOKENS: 2221] |
Contents Roast (comedy) A roast is a form of insult comedy, originating in American humor, in which a specific individual, a guest of honor, is subjected to jokes at their expense, as well as genuine praise and tributes. The assumption is that the roastee can take the jokes in good humor and not as serious criticism or insult. The individual is surrounded by friends, fans, and well-wishers, who can receive some of the same treatment during the evening. The host of the event is called the roastmaster, a title that rhymes with and plays on toastmaster. Anyone mocked in such a way is said to have been roasted. There is a parallel tradition in some countries in which the host of formal events, such as award ceremonies and annual dinners, is expected to good-naturedly mock the event's attendees. In some cases, this has caused controversy when the host is seen as being too insulting. There is also a concept of roasting on internet social media, where a person asks others to mock them, usually by putting up a photo of themselves. Though the mockery is solicited, this activity, too, has caused controversy, with some considering it a form of cyberbullying. Even more controversial is the practice of simply insulting others for comedic effect, which some have referred to as "roasting", though comedians have stressed that a true roast requires the consent of the target. History The tradition has its roots in the raucous gatherings of the New York Friars Club in the early 20th century. These gatherings were private events where members could express themselves freely, often poking fun at each other.[citation needed] In 1949, the New York Friars Club held its first roast,[clarification needed] with French singer Maurice Chevalier as the guest of honor. The format gained public popularity with The Dean Martin Celebrity Roast specials in the 1970s, televised events that brought the concept into American living rooms. Here, celebrities were humorously insulted, praised, and honored by colleagues and comedians, establishing the roast as a form of high-profile entertainment that celebrated the careers and personalities of public figures.[citation needed] Roasts have since evolved, with Comedy Central further popularizing the format in the 2000s with its series of celebrity roasts. These events maintained the tradition's spirit, combining affectionate tribute with biting humor, and often featured a dais of comedians and celebrities who took turns roasting the honoree and each other.[citation needed] Televised roasts in the United States The final few seasons of the television show Kraft Music Hall, from 1968 to 1971, included broadcasts of the Friars Club Roast; the celebrities roasted included Johnny Carson, Milton Berle, Jack Benny, Don Rickles, and Jerry Lewis. Dean Martin hosted a series of roasts on television in 1974 as part of the final season of his self-titled variety show.
After the show was cancelled, NBC decided to schedule a series of The Dean Martin Celebrity Roast specials from the former MGM Grand Hotel and Casino (now Horseshoe Las Vegas) in the Ziegfeld Room; these were recorded and aired approximately once every two months from late 1974 to early 1979, and another three were produced in 1984.[citation needed] From 1998 to 2002, the cable channel Comedy Central produced and broadcast the annual roast of the New York Friars Club, featuring celebrities such as Drew Carey, Jerry Stiller, Rob Reiner, Hugh Hefner, and Chevy Chase.[citation needed] Based on the success of these roasts, Comedy Central began hosting its own roasts on a roughly annual basis, under the name Comedy Central Roast. The first roastee was Denis Leary in 2003, followed by Jeff Foxworthy, Pamela Anderson, William Shatner, Flavor Flav, Bob Saget, Larry the Cable Guy, Joan Rivers, David Hasselhoff, Donald Trump, Charlie Sheen, Roseanne Barr, James Franco, Justin Bieber, Rob Lowe, Bruce Willis, and Alec Baldwin.[citation needed] Comedian Jeff Ross gained fame through his participation in the televised Comedy Central roasts, and is frequently referred to as the "Roastmaster General", a position he in fact held with the New York Friars Club.[citation needed] In 2010, Comedy Central's international affiliates began to produce and air their own local roasts as well. Comedy Central New Zealand has aired roasts of Mike King and Murray Mexted; Comedy Central Africa has aired roasts of Steve Hofmeyr, Kenny Kunene, Somizi Mhlongo, AKA, and Khanyi Mbau; Comedy Central Latin America has aired a roast of Héctor Suárez; Comedy Central Spain has aired roasts of Santiago Segura, El Gran Wyoming, and José Mota; and Comedy Central Netherlands has aired roasts of Gordon (which was the most-watched broadcast in the channel's history), Giel Beelen, Johnny de Mol, Ali B, and Hans Klok.[citation needed] Other televised roasts in the United States The fourth (and final) episode of The Richard Pryor Show in 1977 was a roast of host Richard Pryor.[citation needed] Playboy produced one roast in 1986 of Tommy Chong that aired on the Playboy Channel.[citation needed] Basketball player Shaquille O'Neal produced two editions of his Shaq's All Star Comedy Roast: of himself in 2002 and of Emmitt Smith in 2003.[citation needed] In 2003, the cable channel MTV produced a roast of Carson Daly, which was billed as the MTV Bash. The cable channel TBS produced a roast in 2008 of Cheech & Chong, which was billed as Cheech & Chong: Roasted. The cable channel A&E also produced a roast in 2008, which was of Gene Simmons.[citation needed] The magazine Guitar World organized three "Rock & Roll Roasts" from 2012 to 2014. They were of musicians Zakk Wylde, Dee Snider, and Corey Taylor.[citation needed] A Friars Club roast of Terry Bradshaw was aired on ESPN2 in 2015. The cable channel Fusion aired a roast of Snoop Dogg in 2016, billed as the Snoop Dogg Smokeout. RuPaul's Drag Race has aired five roast-themed episodes: roasts of RuPaul in both season 5 (2013) and in RuPaul's Secret Celebrity Drag Race (2020), a roast of Michelle Visage in season 9 (2017), a mock-funeral roast of Lady Bunny in RuPaul's Drag Race All Stars season 4 (2019), and a Nice Girls Roast in season 13.[citation needed] The cable channel TNT aired a roast of the anchors of the TNT show Inside the NBA in 2020.[citation needed] In 2024, Netflix released a roast of Tom Brady.
Outside the United States Some attempts have been made to adapt the American roast format to a British audience. Channel 4 launched the latest British version on April 7, 2010, with A Comedy Roast; the initial victims were Bruce Forsyth, Sharon Osbourne, and Chris Tarrant, with Davina McCall and Barbara Windsor as later victims.[citation needed] The television series Roast Battle ran for four series from 2018 to 2020 on the British channel Comedy Central. It was an adaptation of the American series Jeff Ross Presents Roast Battle.[citation needed] The Indian comedy group All India Bakchod organized the live show AIB Knockout in January 2015, featuring Arjun Kapoor and Ranveer Singh, with Karan Johar as the roastmaster. The programme caused a controversy for allegedly featuring distasteful, sexist, offensive, and humiliating content, and videos of the event were removed from YouTube. The Indian television series Comedy Nights Bachao, produced by Optimystix Productions, also adopted the roast format. Artists and producers working for Shanghai Xiao Guo Culture Co. Ltd. have been importing foreign stand-up comedy formats since 2012. Roast!, a Chinese version of Comedy Central Roasts, has reached 2.33 billion hits on Tencent's video streaming platform, according to Maoyan, a movie and TV site. Roast! differs in that, instead of a single annual special, it consists of annual seasons of 10 shows with a different celebrity victim – typically singers or actors – each week (season one contains 11, including a triple-length Chinese New Year special). A spin-off web show, Rock & Roast, has also become a hit in China, with 70 million viewers in its 2019 season, a steady increase from 50 million in the prior season.[citation needed] Fictional roasts Roasts have sometimes been portrayed in fictional TV shows. In other cases, standalone roasts have been produced of historical characters, with both roastees and roasters played by actors.[citation needed] The Dean Martin Celebrity Roast aired one fictional roast, of George Washington (played by Jan Leighton), on March 15, 1974.[citation needed] Part 2 of the 1979 TV special Legends of the Superheroes was a roast of various DC Comics superhero characters, hosted by Ed McMahon. The 1997 episode "The Roast" of the series The Larry Sanders Show revolved around a roast of the title character (Garry Shandling). The main plot of the 2013 episode "Correspondents' Lunch" of the NBC sitcom Parks and Recreation involves protagonist Leslie Knope (Amy Poehler) roasting the media of the fictional town of Pawnee in a local correspondents' lunch. In the 2009 episode "Stress Relief" of The Office, main character Michael Scott (Steve Carell) organizes a roast of himself. The 2019 Netflix series Historical Roasts, hosted by Jeff Ross, featured roasts of historical figures Abraham Lincoln (played by Bob Saget), Freddie Mercury (James Adomian), Anne Frank (Rachel Feinstein), Martin Luther King Jr. (Jerry Minor), Cleopatra (Ayden Mayeri), and Muhammad Ali (Jaleel White). In United States politics During presidential election years in the U.S., it is customary for both major party candidates to attend the Alfred E. Smith Memorial Foundation Dinner, typically engaging in a roast of each other, and occasionally themselves.[citation needed] The White House Correspondents' Association and Radio and Television Correspondents' Association have annual dinners that, in some years, feature a comedy roasting of the U.S. President.
Don Imus at the RTCA in 1996, Stephen Colbert in 2006, and Michelle Wolf in 2018 have received particular attention for their biting remarks during their speeches. References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Google_Scholar] | [TOKENS: 2495] |
Contents Google Scholar Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. Released in beta in November 2004, the Google Scholar index includes peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents. Google Scholar uses a web crawler, or web robot, to identify files for inclusion in the search results. For content to be indexed in Google Scholar, it must meet certain specified criteria. A statistical estimate published in PLOS One, using a mark-and-recapture method, put Google Scholar's coverage at approximately 79–90% of all articles published in English, or roughly 100 million documents. Google Scholar has been criticized for not vetting journals and for including predatory journals in its index. The University of Michigan Library and other libraries whose collections Google scanned for Google Books and Google Scholar retained copies of the scans and have used them to create the HathiTrust Digital Library. History Google Scholar arose out of a discussion between Alex Verstak and Anurag Acharya, both of whom were then working on building Google's main web index. Their goal was to "make the world's problem solvers 10% more efficient" by allowing easier and more accurate access to scientific knowledge. This goal is reflected in Google Scholar's advertising slogan "Stand on the shoulders of giants", which was taken from an idea attributed to Bernard of Chartres, quoted by Isaac Newton, and is a nod to the scholars who have contributed to their fields over the centuries, providing the foundation for new intellectual achievements. One of the sources for the texts in Google Scholar is the University of Michigan's print collection. Google Scholar has gained a range of features over time. In 2006, a citation importing feature was implemented supporting bibliography managers such as RefWorks, RefMan, EndNote, and BibTeX. In 2007, Acharya announced that Google Scholar had started a program to digitize and host journal articles in agreement with their publishers, an effort separate from Google Books, whose scans of older journals do not include the metadata required for identifying specific articles in specific issues. In 2011, Google removed Scholar from the toolbars on its search pages, making it both less easily accessible and less discoverable for users not already aware of its existence. Around this period, sites with similar features such as CiteSeer, Scirus, and Microsoft Windows Live Academic search were developed. Some of these are now defunct; in 2016, Microsoft launched a new competitor, Microsoft Academic. A major enhancement was rolled out in 2012, with the possibility for individual scholars to create personal "Scholar Citations profiles". A feature introduced in November 2013 allows logged-in users to save search results into the "Google Scholar library", a personal collection which the user can search separately and organize by tags. Via the "metrics" button, Google Scholar reveals the top journals in a field of interest, and the articles generating these journals' impact can also be accessed. A metrics feature now supports viewing the impact of whole fields of science and academic journals.
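Mark-and-recapture estimation of this kind works by drawing two independent samples from a population and measuring their overlap. A minimal Python sketch of the Lincoln–Petersen estimator behind such coverage studies (the sample sizes below are invented for illustration and are not the figures from the PLOS One study):

# Lincoln–Petersen mark-and-recapture estimate.
# Two independent samples of documents are drawn from the corpus; the size
# of their overlap indicates how large the whole corpus must be.
def lincoln_petersen(n1, n2, overlap):
    # Estimate the total population size from two sample sizes and their overlap.
    return n1 * n2 / overlap

# Hypothetical samples: 2,000 and 1,500 documents sharing 30 in common
# suggest a corpus of about 100,000 documents.
print(lincoln_petersen(n1=2000, n2=1500, overlap=30))  # 100000.0

The intuition: the rarer the overlap between two independent samples, the larger the underlying population must be.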
Google also included profiles for some posthumous academics, including Albert Einstein and Richard Feynman. For several years, the profile for Isaac Newton indicated he was a "professor at MIT", with a "verified email at mit.edu". Features and specifications Google Scholar allows users to search for digital or physical copies of articles, whether online or in libraries. It indexes "full-text journal articles, technical reports, preprints, theses, books, and other documents, including selected Web pages that are deemed to be 'scholarly.'" Because many of Google Scholar's search results link to commercial journal articles, most people will be able to access only an abstract and the citation details of an article, and will have to pay a fee to access the entire article. The most relevant results for the searched keywords are listed first, in order of the author's ranking, the number of references that are linked to it and their relevance to other scholarly literature, and the ranking of the publication that the article appears in. Using its "group of" feature, it shows the available links to journal articles. In the 2005 version, this feature provided a link to both subscription-access versions of an article and to free full-text versions of articles; for most of 2006, it provided links to only the publishers' versions. Since December 2006, it has provided links to both published versions and major open access repositories, including those posted on individual faculty web pages and other unstructured sources identified by similarity. On the other hand, Google Scholar does not allow users to filter explicitly between toll-access and open-access resources, a feature offered by Unpaywall and the tools that embed its data, such as Web of Science, Scopus and Unpaywall Journals, which libraries use to calculate the real costs and value of their collections. Through its "cited by" feature, Google Scholar provides access to abstracts of articles that have cited the article being viewed. It is this feature in particular that provides the citation indexing previously only found in CiteSeer, Scopus, and Web of Science. Google Scholar also provides links so that citations can be either copied in various formats or imported into user-chosen reference managers such as Zotero. "Scholar Citations profiles" are public author profiles that are editable by authors themselves. Individuals, logging on through a Google account with a bona fide address usually linked to an academic institution, can create their own page giving their fields of interest and citations. Google Scholar automatically calculates and displays the individual's total citation count, h-index, and i10-index. According to Google, "three-quarters of Scholar search results pages ... show links to the authors' public profiles" as of August 2014. Through its "Related articles" feature, Google Scholar presents a list of closely related articles, ranked primarily by how similar these articles are to the original result, but also taking into account the relevance of each paper. Google Scholar's legal database of US cases is extensive. Users can search and read published opinions of US state appellate and supreme court cases since 1950, US federal district, appellate, tax, and bankruptcy courts since 1923, and US Supreme Court cases since 1791. Google Scholar embeds clickable citation links within the case, and the How Cited tab allows lawyers to research prior case law and the subsequent citations to the court decision.
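The author metrics mentioned above are simple functions of a list of per-paper citation counts. A rough illustration in Python (our own sketch, not Google Scholar's actual code):

# h-index: the largest h such that the author has h papers with at least
# h citations each. i10-index: the number of papers with at least 10 citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    return sum(1 for c in citations if c >= 10)

papers = [48, 33, 12, 10, 9, 4, 0]   # citation counts for seven papers
print(h_index(papers))    # 5: five papers have at least 5 citations each
print(i10_index(papers))  # 4: four papers have at least 10 citations

For the example list the h-index is 5 because the sixth-most-cited paper has only 4 citations, while only four papers clear the 10-citation bar for the i10-index.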
Ranking algorithm While most academic databases and search engines allow users to select one factor (e.g. relevance, citation counts, or publication date) to rank results, Google Scholar ranks results with a combined ranking algorithm in a "way researchers do, weighing the full text of each article, the author, the publication in which the article appears, and how often the piece has been cited in other scholarly literature". Research has shown that Google Scholar puts especially high weight on citation counts, as well as on words included in a document's title. In searches by author or year, the first results are often highly cited articles, since citation count is strongly determinant; in keyword searches, citation count is probably still the most heavily weighted factor, though other factors also contribute. Limitations and criticism Some reviewers have found Google Scholar to be of comparable quality and utility to subscription-based databases when looking at citations of articles in some specific journals. The reviews recognize that its "cited by" feature in particular poses serious competition to Scopus and Web of Science. A study looking at the biomedical field found citation information in Google Scholar to be "sometimes inadequate, and less often updated". The coverage of Google Scholar may vary by discipline compared to other general databases. Google Scholar strives to include as many journals as possible, including predatory journals, which may lack academic rigor. Specialists on predatory journals say that such journals "have polluted the global scientific record with pseudo-science", which Google Scholar "dutifully and perhaps blindly includes in its central index". Google Scholar does not publish a list of the journals crawled or the publishers included, and the frequency of its updates is uncertain. Bibliometric evidence suggests Google Scholar's coverage of the sciences and social sciences is competitive with other academic databases; as of 2017, Scholar's coverage of the arts and humanities had not been investigated empirically, and Scholar's utility for disciplines in these fields remains ambiguous. Especially early on, some publishers did not allow Scholar to crawl their journals. Elsevier journals have been included since mid-2007, when Elsevier began to make most of its ScienceDirect content available to Google Scholar and Google's web search. However, a 2014 study estimated that Google Scholar could find almost 90% (approximately 100 million) of all scholarly documents on the Web written in English. Large-scale longitudinal studies have found that between 40 and 60 percent of scientific articles are available in full text via Google Scholar links. Google Scholar puts high weight on citation counts in its ranking algorithm and has therefore been criticized for strengthening the Matthew effect: as highly cited papers appear in top positions they gain more citations, while new papers hardly appear in top positions and therefore get less attention from users of Google Scholar and hence fewer citations. The Google Scholar effect is a phenomenon in which some researchers pick and cite works appearing in the top results on Google Scholar regardless of their contribution to the citing publication, because they automatically assume these works' credibility and believe that editors, reviewers, and readers expect to see these citations. Google Scholar also has problems identifying publications on the arXiv preprint server correctly.
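The combined ranking formula is not public, but its reported shape, heavy weight on citation counts plus text relevance, can be sketched with a deliberately simplified scoring function. The weights and fields below are invented for illustration and are not Google Scholar's actual parameters:

import math

# Toy combined relevance score: title match plus log-damped citation count.
def toy_score(query_terms, title, citation_count, w_title=2.0, w_cites=1.0):
    title_words = set(title.lower().split())
    title_match = sum(t.lower() in title_words for t in query_terms) / len(query_terms)
    return w_title * title_match + w_cites * math.log1p(citation_count)

papers = [
    ("Deep learning", 150000),
    ("A survey of deep learning methods", 3000),
    ("Shallow parsing techniques", 500),
]
for title, cites in papers:
    print(round(toy_score(["deep", "learning"], title, cites), 2), title)

Even in this toy version, the log damping illustrates the design tension behind the Matthew-effect criticism above: undamped citation counts would let highly cited papers dominate every query.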
Punctuation characters in titles can produce incorrect search results, and authors are assigned to the wrong papers, which leads to erroneous additional search results. Some search results are even returned without any comprehensible reason. Google Scholar is vulnerable to spam. Researchers from the University of California, Berkeley and Otto-von-Guericke University Magdeburg demonstrated that citation counts on Google Scholar can be manipulated and that complete nonsense articles created with SCIgen were indexed within Google Scholar. They concluded that citation counts from Google Scholar should be used with care, especially when used to calculate performance metrics such as the h-index or impact factor, which is itself a poor predictor of article quality. Google Scholar started computing an h-index in 2012 with the advent of individual Scholar pages. Several downstream packages, such as Harzing's Publish or Perish, also use its data. The practicality of manipulating h-index calculators by spoofing Google Scholar was demonstrated in 2010 by Cyril Labbe from Joseph Fourier University, who managed to rank "Ike Antkare" ahead of Albert Einstein by means of a large set of SCIgen-produced documents citing each other (effectively an academic link farm). As of 2010, Google Scholar was not able to Shepardize case law, as Lexis could. Unlike other indexes of academic work such as Scopus and Web of Science, Google Scholar does not maintain an application programming interface (API) that may be used to automate data retrieval. Use of web scrapers to obtain the contents of search results is also severely restricted by the implementation of CAPTCHAs. Google Scholar does not display or export Digital Object Identifiers (DOIs), a de facto standard implemented by all major academic publishers to uniquely identify and refer to individual pieces of academic work. In 2024, researchers found that Google Scholar could be manipulated through citation-purchasing services. Search engine optimization for Google Scholar Search engine optimization (SEO) for traditional web search engines such as Google has been popular for many years. For several years, SEO has also been applied to academic search engines such as Google Scholar. SEO for academic articles is also called "academic search engine optimization" (ASEO), defined as "the creation, publication, and modification of scholarly literature in a way that makes it easier for academic search engines to both crawl it and index it". ASEO has been adopted by several organizations, among them Elsevier, OpenScience, Mendeley, and SAGE Publishing, to optimize their articles' rankings in Google Scholar. ASEO has been criticised for allowing journals to artificially inflate their metrics and for introducing spam into academic search engines. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Paraguay] | [TOKENS: 2789] |
Contents History of the Jews in Paraguay The history of the Jews in Paraguay has been characterised by the migration of Jewish people, mainly from European countries, to the South American nation, and has resulted in the Jewish Paraguayan community numbering 1,000 today. Migration began primarily from Europe in the late 19th century, when the first waves of Jewish immigrants to Paraguay came from countries such as France and Italy. This was largely a result of liberal immigration policies after the Paraguayan War, which decimated Paraguay's prewar population. During the 1920s, Jews from Poland and Ukraine arrived in Paraguay, and in the 1930s there was a wave of mass immigration of approximately 20,000 Jews from Germany. Jewish immigration to Paraguay increased during World War II, as many sought temporary refuge in the nation before attempting to seek entry into neighbouring countries such as Argentina and Brazil. Following World War II, Israel and Paraguay opened diplomatic relations in 1949; however, in 1970 the Israeli Embassy in Asunción was attacked. This event was largely attributable to the Arab–Israeli conflict, which had a profound impact on the Paraguayan Jewish community. The Jewish community, most of whose members reside in the capital Asunción, has ultimately had a significant influence on Paraguayan society, both culturally and politically. There have been various political disagreements between the Paraguayan Jewish community and Israel, which have affected Paraguayan–Israeli relations. In terms of cultural influence, the Jewish Paraguayan community has established various synagogues in Asunción. Furthermore, various works of literature and film have been created to depict Jewish European immigration to Paraguay, many of them made since 2005. Migration history overview In the late 19th century, Jewish immigrants arrived in Paraguay from European countries such as Italy and France. During World War I, Jews from Palestine (Jerusalem), Egypt and Turkey arrived in Paraguay, mostly Sephardi Jews. In the 1920s, there was a second wave of immigrants from Ukraine and Poland. Between 1933 and 1939, Jews from Germany, Austria and Czechoslovakia took advantage of Paraguay's liberal immigration laws to escape from Nazi-occupied Europe. After World War II, most Jews who arrived in Paraguay were survivors of concentration camps. In the 1960s, approximately 40,000 Germans and their descendants, a majority of whom were Nazi supporters and some of whom were prominent Nazi figures, were temporarily living in Paraguay; the infamous Nazi doctor Josef Mengele, for instance, temporarily lived in the country. Today, the majority of the Paraguayan Jewish community is of Ashkenazi background. First Jewish arrivals - late nineteenth century immigration Paraguay has been a long-time supporter of Jewish people and their rights. In 1881, Paraguayan media published news about the persecution of Jewish people in Europe, raising awareness of widespread discrimination. Paraguay has also had a liberal immigration policy since the 1870s, as a result of the Paraguayan War, also known as the War of the Triple Alliance. That war (1864–1870) was waged by Brazil, Argentina and Uruguay against Paraguay, with the British government supporting the allies with economic and military resources. By the war's conclusion in 1870, Paraguay's political and economic framework was significantly weakened.
Its strength as an independent nation was also severely impacted as it permanently lost territory around the Gran Chaco area. Furthermore, the conflict resulted in two-thirds of Paraguay's citizens perishing. As such, after peace was attained, in order to encourage immigration and recover from large population losses, the Paraguayan government created a clause in its 1870 Constitution that offered religious freedom in the territory. These factors, specifically the country's liberal immigration policy and the 1870 constitutional clause, resulted in an increase in Jews seeking refuge in Paraguay. Thus, in the 1890s, Jewish people emigrated initially from France and Italy to seek temporary or permanent residency in Paraguay, seeing an opportunity to escape discrimination in Europe. Paraguay has historically acted as a temporary destination for many Jewish migrants seeking to gain entry into other South American nations, such as Brazil, Argentina and Uruguay. Stricter immigration policies in these neighbouring countries during the 19th century caused some Jewish immigrants to remain permanently in Paraguay, where they established a community in the capital, Asunción. This was unlike other immigration patterns within South America, such as in Argentina and Brazil, where a majority of Jewish migrants worked in rural areas and in agricultural colonies, rather than in cities. World War immigration Paraguay continued its liberal immigration policies during and after World War I. It is estimated that between 15,000 and 20,000 Jewish people from Poland, Ukraine, Germany and Czechoslovakia temporarily sought refuge in Paraguay during World War I and throughout the early 1920s. The Jewish people who did immigrate to countries within South America, and in particular Paraguay, were of a lower socio-economic status. Sephardi Jews chose to migrate to Latin America in higher numbers than Ashkenazi Jews, whose community preferred to immigrate to the United States and Canada. The Jews who migrated to Paraguay and other South American countries during the early 20th century were mainly Sephardic Jews from Europe and Palestine (Jerusalem) as well as Turkey. They emigrated to escape discrimination within their own homelands, but also to avoid military conscription. Another contributing factor that encouraged migration to Paraguay was the lower barrier to entry compared with, for instance, North America and neighbouring South American countries: Paraguay did not require immigrants to have visas, and granted them free work permits. The rapid influx of Jewish refugees into Paraguay during the early 20th century was also related to quotas on immigration during the Great Depression, which were enforced by Dominion governments and forced Jewish immigrants to seek refuge elsewhere. For instance, Canada admitted only 25,000 Jewish immigrants between 1921 and 1931, compared to 120,000 between 1891 and 1921. This caused many Jewish refugees to seek temporary and permanent refuge in South American countries such as Paraguay. During the interwar period, permanent Jewish immigration to Paraguay was lower compared to other South American countries, such as Argentina, which had 210,000 Jewish residents by 1931. This is largely because Paraguay lacked infrastructure and political stability, and thus was not the first preference for many Jewish immigrants seeking permanent refuge. In 1933, the Nazi regime came to power in Germany.
The regime held strongly anti-Semitic views, driven by an ideology that regarded Jewish people as enemies of the state. There were exclusionary policies along with pogroms, such as Kristallnacht, a Nazi-organised riot in 1938 that had the aim of expelling Jews. The Nazis also operated extermination camps, such as Auschwitz, established between 1941 and 1942, to intern and murder Jews. This prompted mass migration of Jews out of Europe, which meant that by 1942 there were 3,000 Jewish immigrants who had permanently settled in Paraguay, an increase from 1,200 in 1930. Many Jewish people sought both permanent and temporary refuge in Paraguay as countries such as Argentina and Brazil had tightened immigration restrictions. For instance, Argentina accepted 2,221 Jewish immigrants between 1939 and 1941; however, it is estimated that approximately 8,270 Jews entered the country illegally. Many of these individuals had obtained Paraguayan visas and then illegally crossed the border into Argentina. However, not all Jewish immigrants who sought asylum in Paraguay were granted citizenship or a visa. For example, Polish Jews fleeing to Brazil in 1940 aboard a boat called the 'Cabo de Hornos' were refused entry due to tightened immigration restrictions. They then sought refuge in Paraguay but were denied entry due to administrative errors, and this group of would-be Jewish immigrants returned to Europe. A further deterrent for Jewish immigrants at this time was the influence of the Paraguayan Fernheim Colony, composed of 2,000 German Mennonites. The Fernheim Mennonites supported the Nazi regime and anti-Semitism, and saw the Jewish Paraguayan community as a threat to their faith. Josef Mengele, the Nazi physician, is thought to have originally sought refuge with this Mennonite community after he fled to Paraguay following World War II. Another Paraguayan Mennonite settlement, Menno Colony, founded in the 1920s, numbered 1,800 members and was less aligned with the Nazi regime's ideology. During the World War II period, intellectuals and political personalities within the Paraguayan Jewish community published commentaries and created local newspapers supporting Zionism, raising awareness of the discrimination against Jewish people in Europe. Furthermore, during 1942, Paraguay's government implemented greater constraints on German citizens and sympathisers within Paraguay. This was due to a report released that year by the Federal Bureau of Investigation (FBI) which identified Paraguay and other Latin American nations as a hotspot for Nazi activities. As such, Paraguay monitored German citizens living within the nation, and prohibited the wearing of German uniforms and the display of Nazi symbols. These actions were taken to adhere to the demands of the United States government, ultimately in order to secure a loan; however, they also benefitted the Paraguayan Jewish community. As a result of this activism and support for a Jewish state, the first diplomatic representative of Israel arrived in Paraguay in 1950. Twentieth-century political disputes The Paraguayan Jewish community was impacted by Arab–Israeli tensions during the 20th century. On 4 May 1970, a day after a ceremony was held by the Jewish Paraguayan community in Asunción to honour the victims of the Nazis, a shooting occurred at the Israeli embassy in Asunción. A Jewish Paraguayan employee of the embassy was killed, and another injured.
The two Palestinian assailants fled, with media and the Israeli ambassador labelling the shooting 'an attack against Israel'. However, the Jewish Representative Council in Paraguay condemned the attempt to transfer 'struggling Arab and Israel relations' onto the Paraguayan Jewish community. The Paraguayan Jewish Council did not wish to associate the 1967 Arab–Israeli conflict with Paraguayan Jews, stating that doing so could affect the community's independence and its ability to remain neutral. Jewish influence in Paraguay In 2018, Paraguay became the second country in the world to move its embassy in Israel to Jerusalem. However, that year there was a change in the presidency of Paraguay from Horacio Cartes to Mario Abdo Benítez, and the embassy was moved back to Tel Aviv; this led to Israel closing its own embassy in Paraguay. In 1917, the first synagogue was established in Asunción by the Jewish Paraguayan community. Currently there are three synagogues in Paraguay, all located in the capital, Asunción, serving the Ashkenazi, Sephardi and Chabad communities. Statistics as of 2019 indicate that of the 6.9 million Paraguayans, approximately 1,000 are Jewish citizens, known as the 'core Jewish population', meaning both parents are of Jewish heritage. There are also approximately 300 Paraguayans with one Jewish parent. A majority of the Jewish citizens in Paraguay are Ashkenazi and live in the capital, Asunción. The number of Jewish Paraguayans has decreased since 1967, when there were 1,200 core Jewish Paraguayans. This decline in the core Jewish population is largely a result of emigration by members of the Jewish Paraguayan community: from 1948 to 2016, a total of 34 people made Aliyah to Israel, a further nine individuals made Aliyah in 2017, and eight more followed in 2018. Paraguayan Jews have had a significant influence on the domestic and international film industry. The 2019 film Passports to Paraguay depicted the migration of Jews out of Europe during the 1940s as they sought refuge in South American countries such as Paraguay. Literature has also been written to depict the journey of European Jews to Paraguay: Susana Gertopan's 2005 novel 'Barrio Palestina' narrates the story of a Polish Jewish family who emigrated to Paraguay during World War II. The novel highlights how many Jewish people first attempted to seek refuge in Argentina, particularly Buenos Aires, but eventually found themselves in Paraguay due to Argentina's strict immigration restrictions. The Paraguayan Jewish community has also influenced education within Paraguay. After migration to Paraguay, specifically during World War II, Jewish immigrants faced socio-economic difficulties; however, greater educational opportunities during the late 1950s allowed for upward mobility, transforming the Paraguayan Jewish community into a middle and upper-middle class demographic. An example of increased educational opportunities after World War II was the Jewish Paraguayan school, Escuela Integral Estado de Israel, which opened in 1959. Since then, it has provided an education based on Jewish values and teachings, and it was restructured in 2009 to accept enrolments from all Paraguayan students. It is estimated that approximately 70 per cent of the Jewish children within Paraguay attend this school. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Triadic_relation] | [TOKENS: 807] |
Contents Ternary relation In mathematics, a ternary relation or triadic relation is a finitary relation in which the number of places in the relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place. Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product A × B of some sets A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product A × B × C of three sets A, B and C. An example of a ternary relation in elementary geometry involves triples of points. In this case, a triple (A,B,C) is in the relation if the three points are collinear—that is, they lie on the same straight line. Another geometric example of a ternary relation considers triples consisting of two points and a line. Here, a triple (A,B,ℓ) belongs to the relation if the line ℓ passes through both points A and B; in other words, if the two points determine or are incident with the line. Examples A function f : A × B → C in two variables, mapping two values from sets A and B, respectively, to a value in C, associates to every pair (a,b) in A × B an element f(a, b) in C. Therefore, its graph consists of pairs of the form ((a, b), f(a, b)). Such pairs, in which the first element is itself a pair, are often identified with triples. This makes the graph of f a ternary relation between A, B and C, consisting of all triples (a, b, f(a, b)) with a in A, b in B, and f(a, b) in C. Given any set A whose elements are arranged on a circle, one can define a ternary relation R on A, i.e. a subset of A³ = A × A × A, by stipulating that R(a, b, c) holds if and only if the elements a, b and c are pairwise different and, when going from a to c in a clockwise direction, one passes through b. For example, if A = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 } represents the hours on a clock face, then R(8, 12, 4) holds and R(12, 8, 4) does not hold. The ordinary congruence of arithmetic, which holds for three integers a, b, and m if and only if m divides a − b, may formally be considered as a ternary relation. Usually, however, it is instead considered as a family of binary relations between a and b, indexed by the modulus m. For each fixed m, this binary relation has some natural properties, such as being an equivalence relation, while the combined ternary relation is generally not studied as a single relation. A typing relation Γ ⊢ e:σ indicates that e is a term of type σ in context Γ, and is thus a ternary relation between contexts, terms and types. Given homogeneous relations A, B, and C on a set, a ternary relation (A, B, C) can be defined to hold when the composition of relations AB satisfies the inclusion AB ⊆ C. Within the calculus of relations, each relation A has a converse relation Aᵀ and a complement relation Ā. Using these involutions, Augustus De Morgan and Ernst Schröder showed that (A, B, C) is equivalent to (C̄, Bᵀ, Ā) and also equivalent to (Aᵀ, C̄, B̄). The mutual equivalences of these forms, constructed from the ternary relation (A, B, C), are called the Schröder rules.
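The circular ordering on a clock face described above is easy to make concrete. A short Python sketch (illustrative only; the function name is ours) checks whether going clockwise from a to c passes through b on a 12-hour dial:

# R(a, b, c) holds iff a, b, c are pairwise different and b is passed when
# travelling clockwise from a to c on a 12-hour clock face.
def R(a, b, c):
    if len({a, b, c}) < 3:
        return False  # the elements must be pairwise different
    # Compare clockwise distances from a, measured in hours (mod 12)
    return (b - a) % 12 < (c - a) % 12

print(R(8, 12, 4))   # True: clockwise from 8 to 4 passes through 12
print(R(12, 8, 4))   # False: clockwise from 12 to 4 does not pass through 8

References Further reading |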
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Brookings_Report] | [TOKENS: 835] |
Contents Brookings Report Proposed Studies on the Implications of Peaceful Space Activities for Human Affairs, often referred to as "the Brookings Report", was a 1960 report commissioned by NASA and created by the Brookings Institution in collaboration with NASA's Committee on Long-Range Studies. It was submitted to the House Committee on Science and Astronautics of the United States House of Representatives in the 87th United States Congress on April 18, 1961. Significance The report has become noted for one short section entitled "The implications of a discovery of extraterrestrial life", which examines the potential implications of such a discovery on public attitudes and values. The section briefly considers possible public reactions to some possible scenarios for the discovery of extraterrestrial life, stressing a need for further research in this area. It recommended continuing studies to determine the likely social impact of such a discovery and its effects on public attitudes, including study of the question of how leadership should handle information about such a discovery and under what circumstances leaders might or might not find it advisable to withhold such information from the public. The significance of this section of the report is a matter of controversy. Persons who believe that extraterrestrial life has already been confirmed and that this information is being withheld by government from the public sometimes turn to this section of the report as support for their view. Frequently cited passages from this section of the report are drawn both from its main body and from its footnotes. The report has been mentioned in newspapers such as The New York Times, The Baltimore Sun, The Washington Times, and the Huffington Post. Background and context The report was entered into the Congressional Record, which is currently archived at over 1,110 libraries as part of the Federal Depository Library Program. The main author, Donald N. Michael, was a "social psychologist with a background in the natural sciences". "He was a fellow of the American Association for the Advancement of Science, the American Psychological Association, the Society for the Psychological Study of Social Issues and the World Academy of Art and Science." Over 50 years after the report was initially released, the Brookings Institution again focused on space policy by hosting "several panels of experts to discuss topics such as the economic benefits of private industry's involvement, the scientific discoveries resulting from NASA's continued space efforts and the potential for future exploration, and the government's policies and decision making process." Content Although the report discusses the need for research on many policy issues related to space exploration, it is most often cited for passages from its brief section on the implications of a discovery of extraterrestrial life (see the section on possible cover-ups below). The report contains the following chapters: Use in discussions about possible cover-ups The report is sometimes mentioned in discussions about possible government cover-ups of evidence of extraterrestrial life, such as discussions under blog entries of skeptic astronomer Phil Plait. Sometimes these mentions point out the existence of the report; sometimes they argue that the report is evidence of extraterrestrial life. For example, Richard C.
Hoagland, a proponent of conspiracy theories, argues that the report, by outlining plausible motives for government suppression of a discovery of extraterrestrial intelligence, furnishes evidence of an ongoing cover-up of intelligent extraterrestrial life already discovered. The National Investigations Committee On Aerial Phenomena thinks the "report gives weight to previous thinking by scholars who have suggested that the earth already may be under close scrutiny by advanced space races." In an email published by The Virtually Strange Network, entitled "Brookings Report Re-examined", Keith Woodard writes that the Brookings Report: ...did raise the possibility of withholding information, but took no position on its advisability. 'Questions one might wish to answer by such studies,' intoned the report, 'would include: how might such information, under what circumstances, be presented to or withheld from the public for what ends? What might be the role of the discovering scientists and other decision makers regarding release of the fact of discovery?' Those two sentences comprise the report's entire commentary on the subject of covering up the truth. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Articles_with_limited_geographic_scope_from_February_2020] | [TOKENS: 95] |
Category:Articles with limited geographic scope from February 2020 This category combines all articles with limited geographic scope from February 2020 (2020-02) to enable us to work through the backlog more systematically. It is a member of Category:Articles with limited geographic scope. Pages in category "Articles with limited geographic scope from February 2020" The following 19 pages are in this category, out of 19 total. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-:7_363-1] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following the full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, with most instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs), trading emeralds for different goods and vice versa.
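The block world described above is essentially a mapping from integer coordinates to block types. A toy Python sketch of the idea (our own illustration, not Mojang's implementation), storing only non-air blocks sparsely:

# Toy voxel world: a sparse dict from (x, y, z) coordinates to block ids;
# any absent coordinate is treated as air.
world = {}

def place_block(x, y, z, block_id):
    world[(x, y, z)] = block_id

def mine_block(x, y, z):
    # Remove the block, returning it for the inventory; None means air.
    return world.pop((x, y, z), None)

place_block(0, 64, 0, "dirt")
place_block(0, 65, 0, "torch")
print(mine_block(0, 64, 0))  # 'dirt'
print(mine_block(0, 64, 0))  # None: the space was already mined

Real implementations group blocks into fixed-size chunks so that only terrain near players needs to be generated and kept in memory.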
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
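The map seed mentioned above is what makes terrain reproducible: the same seed always yields the same world. A toy illustration in Python, using a hash in place of Minecraft's actual layered-noise generator:

import hashlib

# Deterministic terrain height for the column at (x, z). Hashing (seed, x, z)
# means the same seed always reproduces the same terrain; the real generator
# layers smooth noise instead of a raw hash.
def column_height(seed, x, z, base=64, relief=16):
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return base + digest[0] % relief

print([column_height(12345, x, 0) for x in range(5)])  # same seed, same terrain
print([column_height(67890, x, 0) for x in range(5)])  # new seed, new world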
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text, as well as its only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, and replenishes continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
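The Survival-mode rules above amount to a small state machine over health and hunger. A deliberately simplified Python sketch (the constants and rates are illustrative; the real game also tracks saturation and exhaustion):

MAX_HEALTH = 20
MAX_HUNGER = 20

class Player:
    def __init__(self):
        self.health = MAX_HEALTH
        self.hunger = MAX_HUNGER
        self.inventory = ["pickaxe", "torch"]

    def tick(self):
        if self.hunger == 0:
            self.health -= 1          # starving drains health
        elif self.hunger == MAX_HUNGER and self.health < MAX_HEALTH:
            self.health += 1          # a full hunger bar regenerates health
        if self.health <= 0:
            dropped = self.inventory  # items drop on death
            self.inventory = []
            self.health = MAX_HEALTH  # respawn at the spawn point
            return dropped
        return None

p = Player()
p.hunger = 0                          # an empty hunger bar...
while (drops := p.tick()) is None:
    pass
print(drops)  # ['pickaxe', 'torch']: ...eventually starves the player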
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers host a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time, while Bedrock Edition Minecraft Realms server owners can invite up to 3,000 people, again with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps; Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, cross-platform play between the Windows 10, iOS, and Android platforms was announced for Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, both created specifically for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new advancements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation (a minimal recipe sketch appears at the end of this passage). The Xbox 360 Edition supported downloadable content, available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. The edition later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on Nintendo's Super Mario franchise was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another pack, based on Fallout, was released for consoles that December, and for Windows and Mobile in April 2017.

In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin was opened.

In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency purchased with real money. Additionally, users can access specific content through a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
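As referenced above, here is a minimal illustration of the data pack mechanism. The snippet writes a shaped crafting recipe in the JSON layout used by early data pack versions (roughly Java Edition 1.13 through 1.20; later versions renamed some fields and folders). The `example` namespace and the file it writes are hypothetical; this is a sketch, not an official tool.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Writes a hypothetical shaped-recipe file into a data pack directory tree.
// JSON layout follows early data pack versions (circa Java Edition 1.13-1.20);
// later versions changed some field and folder names. Illustrative sketch only.
public class DataPackRecipe {
    public static void main(String[] args) throws Exception {
        String recipe = """
            {
              "type": "minecraft:crafting_shaped",
              "pattern": [ "###", "# #", "###" ],
              "key": { "#": { "item": "minecraft:cobblestone" } },
              "result": { "item": "minecraft:furnace", "count": 1 }
            }
            """;
        // Data packs of this era place recipes under data/<namespace>/recipes/<name>.json
        Path target = Path.of("mypack", "data", "example", "recipes", "furnace_ring.json");
        Files.createDirectories(target.getParent());
        Files.writeString(target, recipe);
        System.out.println("Wrote " + target);
    }
}
```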
Among the features he explored in RubyDung was a first-person view similar to Dungeon Keeper, though he ultimately discarded the idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction, heavily influencing its style, including the return of the first-person mode, the "blocky" visuals and the block-building fundamentals. Unlike Infiniminer, however, Persson wanted Minecraft to have RPG elements.

The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release a full 1.0 version, the second part of the "Adventure Update", on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the company behind the Microsoft Windows operating system and the Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA) that had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions usually received annual major updates, free to players who had purchased the game, each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. In late 2024, however, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs; it cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features, such as dynamic shadows, screen space reflections, volumetric fog, and bloom, without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date.

Development of the original edition of Minecraft, then known as Cave Game and now known as the Java Edition, began in May 2009,[k] and on 13 May Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game; this initial public build later became known as Classic. Further developmental phases, dubbed Survival Test, Indev, and Infdev, were released throughout 2009 and 2010.

The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December, and assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. The move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition.

The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios.

Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems.

On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native PlayStation 5 version of the Bedrock Edition was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well.

An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPads in autumn 2018, and it was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta of the Education Edition was made available for Google Play Store-compatible Chromebooks, and the full game was released on the Google Play Store for Chromebooks on 7 August 2020.

On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017; the iOS version followed on 15 September 2017 and the Android version on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023.

A separate Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. Its beta release launched on the Windows Store on 29 July 2015, and after nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release brought new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other, though the two games otherwise remain separate.

Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive.

Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints; Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design

Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Speaking about learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hissing of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." (This sampler-style pitching is sketched at the end of this passage.)

Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld also remarked that the sound engine was "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine."

The background music in Minecraft consists of instrumental ambient music. To compose it, Rosenfeld used Ableton Live along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music added in the game's 2013 "Music Update". A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
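Rosenfeld's description above of putting a recording "in a sampler" and pitching it around corresponds to a simple resampling operation. The sketch below is a minimal, hypothetical illustration of sampler-style pitch shifting by linear-interpolation resampling; it is not Rosenfeld's actual workflow or the game's audio code.

```java
// Minimal sampler-style pitch shift: resample a mono PCM buffer by a factor.
// A factor above 1.0 raises the pitch (and shortens the sample), as a hardware
// or software sampler would. Illustrative only; not the game's audio code.
public class PitchShift {
    static short[] pitch(short[] in, double factor) {
        int outLen = (int) (in.length / factor);
        short[] out = new short[outLen];
        for (int i = 0; i < outLen; i++) {
            double pos = i * factor; // read position in the source buffer
            int j = (int) pos;
            double frac = pos - j;
            short a = in[j];
            short b = in[Math.min(j + 1, in.length - 1)];
            out[i] = (short) Math.round(a + frac * (b - a)); // linear interpolation
        }
        return out;
    }

    public static void main(String[] args) {
        // A 440 Hz sine at 44.1 kHz, pitched up a perfect fifth (x1.5 -> ~660 Hz).
        int rate = 44100;
        short[] tone = new short[rate];
        for (int i = 0; i < tone.length; i++) {
            tone[i] = (short) (Math.sin(2 * Math.PI * 440 * i / rate) * 16000);
        }
        short[] shifted = pitch(tone, 1.5);
        System.out.println("in: " + tone.length + " samples, out: " + shifted.length);
    }
}
```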
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have contributed, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the primary composer. Microsoft has retained ownership of all the game's music besides Rosenfeld's independently released albums, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then was longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."

Reception

Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game and the ease with which it enables emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that it strikes a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occur periodically. Despite the game having left beta in 2011, GameSpot said it had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste".

A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial, in-game tips, and crafting recipes, saying these make the game more user-friendly. The Xbox One Edition was one of the best-received ports, praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, praised for having worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds.

Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content.

Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. The game had no publisher backing and was never commercially advertised, spreading instead through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the sixth best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies, and as of April 2025, it has sold over 350 million.

The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when it broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies, and Minecraft: Pocket Edition had sold 21 million. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge of primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million.

In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth-best game of the year as well as the eighth-best indie game of the year, and Rock Paper Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival, and won the Grand Prize and the community-voted Audience Award. At the 2011 Game Developers Choice Awards, Minecraft won in the categories of Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed sixth on USgamer's The 15 Best Games Since 2000 list, and in 2016 it placed sixth on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and Console.

Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, who cited emails he had received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition with its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang.

In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying it enabled improved security, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated.

In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four.

The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first vote this was changed, and losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs (the crab, the penguin, and the armadillo), with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning. In September 2024, as part of a blog post detailing future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired.

Cultural impact

In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development.

Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion.

Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released; it made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age.

The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, Cody Sumter, a member of the Human Dynamics group at the MIT Media Lab, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software tools have been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as the MakerBot and RepRap.

In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding, "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture; the ideas presented by the citizens were a template for political decisions.

In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed (a sketch of this one-block-per-meter mapping follows this passage).

Taking advantage of the game's accessibility in countries where websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors who have been censored or arrested, such as Jamal Khashoggi, from countries including Egypt, Mexico, Russia, Saudi Arabia and Vietnam. The neoclassical virtual building was created over about 250 hours by an international team of 24 people.

Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while others play the game in a more vanilla or more consistency-oriented way.

Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition, with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone circuits enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
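The Denmark reconstruction referenced above boils down to mapping a real-world elevation grid onto block columns at one block per meter. The following Java sketch is a hypothetical, simplified illustration of that idea, including a clamp at the roughly 192-meter ceiling mentioned; it is not the Danish Geodata Agency's actual pipeline.

```java
// Hypothetical sketch: map a real-world elevation grid (meters above sea level)
// onto block-column heights at one block per meter, clamped to the roughly
// 192-meter ceiling above in-game sea level noted above. Not the Danish
// Geodata Agency's actual tooling.
public class GeodataToBlocks {
    static final int MAX_ABOVE_SEA = 192; // approximate ceiling above sea level

    static int[][] toColumnHeights(double[][] elevationMeters) {
        int[][] heights = new int[elevationMeters.length][];
        for (int x = 0; x < elevationMeters.length; x++) {
            heights[x] = new int[elevationMeters[x].length];
            for (int z = 0; z < elevationMeters[x].length; z++) {
                int h = (int) Math.round(elevationMeters[x][z]); // 1 block = 1 m
                heights[x][z] = Math.min(Math.max(h, 0), MAX_ABOVE_SEA);
            }
        }
        return heights;
    }

    public static void main(String[] args) {
        // Denmark's highest point (~171 m) fits under the ~192 m ceiling,
        // which is why a one-to-one, full-scale build was possible.
        double[][] sample = { { 0.0, 12.5 }, { 170.9, 200.0 } };
        int[][] cols = toColumnHeights(sample);
        System.out.println(cols[1][0] + " " + cols[1][1]); // prints "171 192"
    }
}
```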
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft."

Following the initial surge in Minecraft's popularity in 2010, other video games were criticized for their various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or a superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was its "low resolution pixel art", which too closely resembled the art in Minecraft and resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and its Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as these were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). In the end, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team.

In November 2024, artificial intelligence companies Decart and Etched released Oasis, an AI-generated version of Minecraft, as a proof of concept. Every in-game element is generated by the model in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The DMCA claim was later withdrawn.

Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/w/index.php?title=Black_hole&printable=yes] | [TOKENS: 13839] |
Black hole

A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body with a temperature inversely proportional to the black hole's mass (a standard formula is quoted after this passage). This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.

Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole; due to this influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first known black hole was Cygnus X-1, identified by several researchers independently in 1971.

Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases this creates a quasar, among the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location, and such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses.

History

The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars envisioned very large stars, in contrast to the modern concept of an extremely dense object.
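The inverse temperature–mass relation referenced in the introduction above has a standard closed form, quoted here for orientation rather than drawn from this article's sources:

\[ T_{\mathrm H} \;=\; \frac{\hbar c^{3}}{8\pi G M k_{\mathrm B}} \;\approx\; 6.2\times 10^{-8}\ \mathrm{K}\times\frac{M_{\odot}}{M}, \]

so a black hole of a few tens of solar masses sits at a few billionths of a kelvin, far below the 2.7 K cosmic microwave background, which is why direct observation is considered essentially impossible.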
Michell, in a short part of a letter published in 1784, calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light (made explicit in the worked example after this passage).: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, while speculating on the origin of the Solar System in his book Exposition du Système du Monde, Laplace mentioned that a star could be invisible if it were sufficiently large. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach.

In 1905, Albert Einstein showed that the laws of electromagnetism are invariant under a Lorentz transformation: they are identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be incorporated.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the gravitational redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required the development of general relativity.: 19 By 1915, Einstein had refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics.

Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time.

Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius; he missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars.
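Michell's 1784 argument, flagged above, can be made explicit with Newtonian escape velocity; this worked example is added for clarity and is not drawn from the article's sources. For a sphere of uniform density \(\rho\), mass grows as the cube of the radius, so the escape velocity grows linearly with radius:

\[ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}, \qquad M = \frac{4}{3}\pi\rho r^{3} \;\Rightarrow\; v_{\mathrm{esc}} = r\sqrt{\frac{8\pi G\rho}{3}}. \]

At the Sun's density, a radius 500 times the Sun's gives \(v_{\mathrm{esc}} \approx 500 \times 618\ \mathrm{km/s} \approx 3.1\times 10^{5}\ \mathrm{km/s}\), just above the speed of light, which is Michell's conclusion. Setting \(v_{\mathrm{esc}} = c\) and solving for the radius also reproduces the formula for the Schwarzschild radius, \(r_{\mathrm s} = 2GM/c^{2}\), though the agreement between the Newtonian and relativistic expressions is essentially a coincidence.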
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure is larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable (the limiting mass is quoted after this passage), and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse.

In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei (neutron stars), but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved remaining questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. B. Kent Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205

The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In their paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results as seen from far outside the star. The implosion starts as one might expect: the star's material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases, and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217

In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside the black hole cannot affect events that occur outside it. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found.
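The limiting mass Chandrasekhar computed, referenced above, takes the following standard textbook form (quoted for orientation, not from this article's sources), where \(\mu_e\) is the mean molecular weight per electron and \(m_{\mathrm H}\) the mass of hydrogen:

\[ M_{\mathrm{Ch}} \;=\; \frac{\omega_{3}^{0}\sqrt{3\pi}}{2}\left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_{\mathrm H})^{2}} \;\approx\; \frac{5.83}{\mu_e^{2}}\,M_{\odot}, \]

which evaluates to about 1.4 solar masses for the fully ionized matter (\(\mu_e = 2\)) typical of white dwarfs. Masses above this cannot be supported by electron degeneracy pressure, which is the threshold Eddington found absurd.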
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel showed that the Schwarzschild solution is the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole is defined by its mass alone. Similar uniqueness results were later found for Reissner–Nordström and Kerr black holes, which are defined only by their mass and their charge or spin, respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.

At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore that the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes.

Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole.

Work by James Bardeen, Jacob Bekenstein, Brandon Carter, and Stephen Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.

While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the centers of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but ubiquitous in the centers of galaxies: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of stars in a galaxy's central bulge to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole.

On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, a signal named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo.

On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole at the center of the Milky Way galaxy, Sagittarius A*. The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored, since he had died in 2018 and the prize is not awarded posthumously.

In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and his stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio.

Definition

A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. Verifying an object as a black hole by this definition, however, would require waiting an infinite time, at an arbitrarily large distance from the black hole, to confirm that nothing ever escapes; the definition therefore cannot be used to identify a physical black hole in practice. More broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is commonly taken to be a compact object with a mass larger than about four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light.

Properties

The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture holds for real black holes is currently an unsolved problem.

The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.

Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality

\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}

for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate the inequality exist, but they do not possess an event horizon. These are so-called naked singularities, which could be observed from the outside. Because such singularities would make the universe inherently unpredictable, many physicists believe they cannot exist. The weak cosmic censorship hypothesis, proposed by Roger Penrose, rules out the formation of such singularities when they would be created through the gravitational collapse of realistic matter. However, this hypothesis has not been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even reach the extremal limit, beyond which a naked singularity would form, since natural processes counteract increasing spin and charge as a black hole approaches extremality.

The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly. One stellar-mass black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole, Sagittarius A*, rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, which appears red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole's mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spins of both progenitor black holes and of the merged hole, but such events are rare.

A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is

J \leq \frac{GM^{2}}{c},

allowing definition of a dimensionless spin magnitude such that

0 \leq \frac{cJ}{GM^{2}} \leq 1.

Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q of a nonspinning black hole is bounded by

Q \leq \sqrt{G}\,M,

where G is the gravitational constant and M is the black hole's mass.

Classification

Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes.

Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical particles resist being forced into the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the star will instead be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity, and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star.

Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes in the 110–350 solar mass range.

The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds a black hole becomes unstable once the hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass.

Structure

While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets, but they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism that forms jets is not yet known, but several options have been proposed. One proposed method of fuelling these jets is the Blandford–Znajek process, in which the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
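To make the mass classes above concrete, the length scale of each class can be evaluated with the Schwarzschild-radius formula r_s = 2GM/c² (discussed further in this section). The sketch below is a minimal Python illustration; the representative masses are round illustrative choices, not values for specific objects:

# Schwarzschild radius r_s = 2GM/c^2 for a representative member of each
# black hole mass class discussed above. Masses are illustrative round numbers.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

examples = {
    "micro / primordial (1e12 kg)": 1e12,
    "stellar (10 M_sun)": 10 * M_sun,
    "intermediate (1e3 M_sun)": 1e3 * M_sun,
    "supermassive (1e8 M_sun)": 1e8 * M_sun,
}

for label, M in examples.items():
    r_s = 2 * G * M / c**2   # metres
    print(f"{label:30s} r_s = {r_s:.3e} m")

The radius grows linearly with mass: roughly a femtometre for a 10¹² kg primordial hole, about 30 km for a 10 M☉ stellar hole, and around 2 astronomical units for a 10⁸ M☉ supermassive hole.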
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward by internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum.

Accretion disks can be classified as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape.

Quasar accretion disks are expected to usually appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part travelling away appearing redder and dimmer.

In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, while an outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and on the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is

r_{\rm ISCO} = 3\,r_{\rm s} = \frac{6\,GM}{c^{2}},

where r_ISCO is the radius of the ISCO, r_s is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly with particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO moves inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
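How strongly spin moves the ISCO can be computed from the standard Bardeen–Press–Teukolsky expression for the equatorial ISCO of a spin-less test particle around an uncharged (Kerr) black hole. A minimal Python sketch, with radii in units of GM/c² (so dividing by 2 converts to Schwarzschild radii):

import math

def isco_radius(a_star: float, prograde: bool = True) -> float:
    """Equatorial ISCO radius around an uncharged (Kerr) black hole,
    in units of GM/c^2, from the Bardeen-Press-Teukolsky formula.
    a_star is the dimensionless spin cJ/GM^2, between 0 and 1."""
    z1 = 1 + (1 - a_star**2) ** (1 / 3) * (
        (1 + a_star) ** (1 / 3) + (1 - a_star) ** (1 / 3)
    )
    z2 = math.sqrt(3 * a_star**2 + z1**2)
    sign = -1 if prograde else +1
    return 3 + z2 + sign * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

print(isco_radius(0.0))                   # 6.0: the Schwarzschild value 6 GM/c^2
print(isco_radius(1.0, prograde=True))    # 1.0: at the horizon of an extremal hole
print(isco_radius(1.0, prograde=False))   # 9.0: i.e. 4.5 Schwarzschild radii

The three printed values reproduce the Schwarzschild result above and the extremal prograde and retrograde cases discussed next.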
For example, the ISCO for a particle orbiting retrograde around a maximally spinning black hole can be as far out as 9 GM/c² (4.5 Schwarzschild radii), while the ISCO for a particle orbiting prograde can reach the event horizon itself.

The photon sphere is a spherical boundary on which photons moving tangentially to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; for charged, non-rotating black holes, the photon sphere radius is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter.

For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon orbits prograde or retrograde. A photon orbiting prograde circles at between 1 and 3 GM/c² from the center of the black hole, while a photon orbiting retrograde circles at between 3 and 4 GM/c², the exact location depending on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there is only one photon sphere, whose radius decreases with increasing black hole charge. For non-extremal, charged, rotating black holes, there are always two photon spheres, with the exact radii depending on the parameters of the black hole.

Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime drags any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.

The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this region it is no longer possible for free-falling matter to follow circular orbits or to stop its final descent into the black hole. Instead, it rapidly plunges toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. Light and radiation emitted from this region can, however, still escape the black hole's gravitational pull.

For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through

r_{\rm s} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}~{\rm km},

where r_s is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] down to

r_{+} = \frac{GM}{c^{2}}

for an extremal black hole, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water.

The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward, towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate.

To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching it from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clock appears to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222

Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole build up at the horizon, causing the curvature of spacetime there to grow to infinity; this would cause an observer falling in to experience tidal forces. The phenomenon is often called mass inflation, since it is associated with a parameter describing the black hole's internal mass growing exponentially, and the resulting buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would prevent mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation backscatters off the black hole's spacetime curvature and travels outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity; this buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would be deformed only a finite amount by tidal forces, even though the spacetime curvature is infinite at the singularity. This contrasts with a strong singularity, at which an object would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted.

Ignoring quantum effects, every black hole contains a singularity, a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region is a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density.

Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces, in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume.

Alternative forms of general relativity, including those that add some quantum effects, can lead to regular, or nonsingular, black holes. For example, the fuzzball model, based on string theory, holds that black holes are actually made up of quantum microstates and need have neither a singularity nor an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large but not infinite.

Formation

Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars, or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), and collapse of hypothetical self-interacting dark matter.

Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of its life, a star runs out of hydrogen to fuse and starts fusing progressively heavier elements, up to iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion then ceases. If the iron core of the star is too massive, the star can no longer support itself and undergoes gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time in the reference frame of the infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.

Observations of quasars at redshift z ∼ 7, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to grow a black hole to quasar scale. One suggestion is the direct collapse of the nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star that collapses into a black hole. Seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is typically unstable against fragmentation into multiple stars. Another approach therefore suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can also accrete sufficient material to collapse into a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare.

In the current epoch of the universe, the conditions needed to form black holes are rare and mostly found only in stars. In the early universe, however, conditions may have allowed black holes to form by other means. Fluctuations of spacetime soon after the Big Bang may have produced regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime within them could become large enough to cause them to collapse into a black hole. Different models of the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, because the universe was expanding rapidly and lacked the density contrasts necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as that of the Big Bang.

In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form in collisions of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth.

Evolution

Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little matter for the black holes to interact with gravitationally that would allow them to draw closer to each other. This phenomenon has been called the final parsec problem, as the distance at which it occurs is usually around one parsec.

When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or to instabilities in the accretion disk.
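Both figures just quoted can be estimated from first principles. The Eddington limit follows from balancing radiation pressure on electrons against gravity on protons, and the 5.7%–42% efficiency range follows from the binding energy of matter at the ISCO. A minimal Python sketch, assuming fully ionized hydrogen for the Eddington estimate and using the standard equatorial Kerr identity η = 1 − √(1 − 2/(3r̂)), with r̂ the ISCO radius in units of GM/c²:

import math

G, c = 6.674e-11, 2.998e8          # SI units
m_p = 1.673e-27                    # proton mass, kg
sigma_T = 6.652e-29                # Thomson cross-section, m^2
M_sun = 1.989e30                   # solar mass, kg

def eddington_luminosity(M_kg: float) -> float:
    """Luminosity at which radiation pressure on electrons balances
    gravity on protons (pure ionized hydrogen assumed), in watts."""
    return 4 * math.pi * G * M_kg * m_p * c / sigma_T

def radiative_efficiency(r_isco: float) -> float:
    """Fraction of rest mass radiated by matter spiralling to the ISCO,
    eta = 1 - sqrt(1 - 2/(3 r)), with r in units of GM/c^2."""
    return 1 - math.sqrt(1 - 2 / (3 * r_isco))

print(f"L_Edd(1 M_sun) = {eddington_luminosity(M_sun):.2e} W")
print(f"eta(Schwarzschild, r = 6) = {radiative_efficiency(6):.3f}")     # ~0.057
print(f"eta(extremal prograde, r = 1) = {radiative_efficiency(1):.3f}") # ~0.423

The two efficiency values bracket the 5.7%–42% range quoted above; the Eddington luminosity for one solar mass comes out near 1.3×10³¹ W.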
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed being torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what are known as tidal disruption events (TDEs). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation.

The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly while the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected.

If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass and larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and so be able to evaporate), a black hole would need a mass smaller than that of the Moon; such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any.

The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's energy to changes in its surface area, angular momentum, and charge. The second law states that the surface area of a black hole never decreases on its own. Finally, the third law states that the surface gravity of a black hole can never be reduced to zero. These laws are mathematical analogs of the laws of thermodynamics.
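The temperature side of this analogy can be checked numerically. The Hawking temperature is T = ħc³/(8πGMk_B), inversely proportional to mass; the short sketch below reproduces the figures quoted earlier in this section (constants are rounded standard values):

import math

hbar = 1.055e-34     # reduced Planck constant, J s
G, c = 6.674e-11, 2.998e8
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

def hawking_temperature(M_kg: float) -> float:
    """Black-body temperature of Hawking radiation,
    T = hbar c^3 / (8 pi G M k_B), in kelvins."""
    return hbar * c**3 / (8 * math.pi * G * M_kg * k_B)

print(f"T(1 M_sun) = {hawking_temperature(M_sun):.2e} K")   # ~6.2e-8 K, i.e. 62 nK

# Mass below which a hole is hotter than the 2.7 K microwave background
# and can therefore evaporate on balance:
M_crit = hbar * c**3 / (8 * math.pi * G * k_B * 2.7)
print(f"M_crit = {M_crit:.2e} kg")   # ~4.5e22 kg, below the Moon's ~7.3e22 kg

The two printed values confirm the 62-nanokelvin figure for a one-solar-mass hole and the statement that only holes lighter than the Moon can out-radiate the microwave background.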
The laws of black hole mechanics and the laws of thermodynamics are not equivalent, however, because according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole instead continuously emits thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have an entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29

Observational evidence

Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11

The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole's shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*.

Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long, perpendicular arms. The beams reflect off mirrors at the ends of the arms and recombine at the intersection, where they are arranged to cancel each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Because gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometres long and must carefully control for terrestrial noise to detect them. Since the first detection in 2015, multiple gravitational waves from black holes have been detected and analyzed.
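The weakness of the signal can be quantified: a gravitational wave of strain h changes an arm of length L by ΔL = hL. A minimal sketch, taking h = 10⁻²¹ as a representative order of magnitude for a detected merger (an illustrative assumption, not the value for any specific event):

h = 1e-21                   # typical peak strain of a detected merger (order of magnitude)
L = 4000.0                  # LIGO arm length, metres
proton_diameter = 1.7e-15   # metres, for scale

delta_L = h * L             # change in arm length induced by the passing wave
print(f"delta_L = {delta_L:.1e} m "
      f"(~{delta_L / proton_diameter:.0e} proton diameters)")

The arms stretch by only about 4×10⁻¹⁸ m, roughly a thousandth of a proton's diameter, which is why kilometre-scale arms and extreme noise isolation are required.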
The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the mass estimate for Sagittarius A* to 4.3×10⁶ M☉, confined within a radius of less than 0.002 light-years. This upper limit on the radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not prove that Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass in such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole.

X-ray binaries are binary systems that emit the majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff (TOV) limit dictates the maximum mass of a nonrotating neutron star, estimated at about two solar masses. A rotating neutron star can be slightly more massive, but if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole.

The first strong candidate for a black hole, Cygnus X-1, was identified in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical star's spectrum, reported in 1986, led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not of the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes.

The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
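At its simplest, the Keplerian fitting described above is an application of Kepler's third law, M = 4π²a³/(GP²), for a star of orbital period P and semi-major axis a. A minimal sketch with rounded values close to those published for the star S2 (taken here as P ≈ 16 yr and a ≈ 1,000 AU, illustrative figures rather than the precise measured elements):

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, metres
year = 3.156e7     # seconds

# Approximate orbital elements of the star S2 (rounded, illustrative)
P = 16.0 * year
a = 1000.0 * AU

# Kepler's third law, neglecting the star's own mass
M = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"M ~ {M / M_sun:.1e} M_sun")   # roughly 4e6 solar masses

The result, around 4×10⁶ solar masses, agrees with the refined value quoted above to within the roughness of the input figures.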
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, in regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully in attempts both to identify and to measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself.

Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this regime is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass: 7.1±1.3 M☉.

Alternatives

While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood, and new exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Hypothetical electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star (a hypothetical compact star stabilized by an exotic state of matter) is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, while functioning via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of surrounding space, exerting outward pressure and preventing a singularity from forming. A black star would be collapsing gravitationally slowly enough that quantum effects keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell with a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even contain another gravastar inside, a configuration called a 'nestar'.

Open questions

According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this loss of information was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that the information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies of the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126

Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of massive Population III stars. However, such stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of mechanisms by which these supermassive black holes may have formed. Smaller black holes may have undergone repeated mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would otherwise produce multiple stars, due to low angular momentum or heating from a nearby galaxy; given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms may allow black holes to grow faster than the theoretical Eddington limit, for instance when dense gas in the accretion disk limits the outward radiation pressure that would otherwise throttle accretion; however, the formation of bipolar jets tends to prevent sustained super-Eddington rates.

In fiction

Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space, with its "black Sun", and the 1935 short story Starship Invincible, with its "hole in space". As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet near a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching, but never crossing, the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other means of faster-than-light travel, as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Online_university] | [TOKENS: 2099] |
Contents Online university A virtual university (or online university) provides higher education programs through electronic media, typically the Internet. Some are bricks-and-mortar institutions that provide online learning as part of their extended university courses, while others solely offer online courses. They are regarded as a form of distance education. The goal of virtual universities is to provide access to the part of the population who would not be able to attend a physical campus, for reasons such as distance—in which students live too far from a physical campus to attend regular classes—and the need for flexibility—some students need the flexibility to study at home whenever it is convenient for them to do so. Some of these organizations exist only as loosely tied combines of universities, institutes or departments that together provide a number of courses over the Internet, television or other media, separate and distinct from the programs offered by any single institution outside the combine. Others are individual organizations with a legal framework, yet are called "virtual" because they appear only on the Internet, without a physical location aside from their administration units. Still other virtual universities are organized around one or more physical locations, with or without actual campuses, which receive program delivery through technological media broadcast from another location where professors give televised lectures. Program delivery in a virtual university is administered through information and communications technology such as web pages, e-mail and other networked sources. As virtual universities are relatively new and vary widely, questions remain about accreditation and the quality of assessment. History The defining characteristic of all forms and generations of distance education is the separation of student and teacher in time and space. Distance education can be seen as the precursor to online learning. Before the advent of virtual universities, many higher education institutions offered some distance education through print-based correspondence courses, often referred to as a "course in a box". Such courses have since been developed so that students can obtain almost immediate feedback from professors and online tutors through e-mails or online discussions. When the term "virtual" was first coined in the computational sense, it applied to things that were simulated by the computer, like virtual memory. Over time, the adjective has been applied to things that physically exist and are created or carried on by means of computers. The Open University in the United Kingdom was the world's first successful distance teaching university. It was founded in the 1960s on the belief that communications technology could bring high quality degree-level learning to people who had not had the opportunity to attend campus universities. The idea for a "wireless university" was first discussed at the BBC (British Broadcasting Corporation) by the educationalist and historian J.C. Stobbart. From these early beginnings, more ideas came forth until finally the Labour Party under the leadership of Harold Wilson formed an advisory committee to establish an Open University. With the goal of bringing higher education to all those who wanted to access it, the committee came up with various scenarios before settling on the name Open University.
The first idea floated in the UK was to have a "teleuniversity", which would combine broadcast lectures with correspondence texts and visits to conventional universities. In the "teleuniversity" scenario, courses would be taught on radio and television, and in fact many universities adopted this technology for their distance education courses. The name "teleuniversity" morphed into the "University of the Air", which still had the same goal of reaching the lower-income groups who did not have access to higher education. The name "University of the Air" did not stick, and by the time the first students were admitted in January 1971 the name had become what it is today: the Open University. The OU proved that it was possible to teach university-level courses to students at a distance. By 1980, total student numbers at the OU had reached 70,000 and some 6,000 people were graduating each year. The 1980s saw this expansion continue as more courses and subject areas were introduced; as the importance of career development grew, the university began to offer professional training courses alongside its academic programmes. By the mid-nineties, the OU was using the internet. As of 2008, more than 180,000 students were interacting with the OU online from home. The idea of a virtual university as an institution that used computers and telecommunications instead of buildings and transport to bring students and teachers together for university courses was first published in works like "De-Schooling Society" by Ivan Illich, which introduced the concept of using computer networks as switchboards for learning, in 1970.[citation needed] In 1971 George Kasey, a media (activist) ethicist, delivered a series of lectures on "the Philosophy of Communications De-Design" under the sponsorship of Phil Jacklin PhD, professor at University of California San Jose, a member of "The (San Francisco) Bay Area Committee for Open Media and Public Access." The lectures contained the theoretical outlines for the use of telecommunications and media for de-schooling and de-design of mainstream education and an alternative Virtual Free University system. By 1972 George Kasey had established "Media Free Times - periodical Multimedia Random Sampling of Anarchic Communications Art", a prototype for remote learning using "multi-media periodicals", which are now commonly referred to as "web pages".[citation needed] The concept was developed further in 1995 by John Tiffin and Lalita Rajasingham in their book "In Search of the Virtual Class: Education in an Information Society" (London and New York, Routledge), based on a joint research project at Victoria University of Wellington that ran from 1986 to 1996.[citation needed] Called the virtual class laboratory, the project used dedicated telecommunication systems to make it possible for students to attend class virtually or physically, and was at first supported by a number of telecommunication organisations. Its purpose was to identify the critical factors in using ICT for university-level education. In 1992 the virtual class lab moved onto the Internet. A number of other universities were involved in pioneering initiatives in the late eighties, and experiments were conducted between Victoria University in New Zealand, the University of Hawaii, Ohio State University and Waseda University to conduct classes and courses at an international level via telecommunications. This led to the concept of a Global Virtual University.
Coursework Providing access to higher education for all students, especially adult learners, is made easier by the fact that most virtual universities have no entry requirements for their undergraduate courses. Entry requirements do apply to courses aimed at postgraduates or those who work in specific jobs. Studying at a virtual university differs in essential ways from studying at a brick-and-mortar university. There are no buildings and no campus to go to, because students receive learning materials over the Internet. In most cases, only a personal computer and an Internet connection are needed, in place of the physical classroom attendance that traditional study requires. Course materials can include printed material, books, audio and video cassettes, TV programmes, CD-ROM/software, and web sites. Support is offered to learners by the professor or a tutor online, through e-mails, if they are having problems with the course. Taking courses online means that students will be learning in their own time by reading course material, working on course activities, writing assignments and perhaps working with other students through interactive teleconferences. Online learning can be an isolating experience, since students spend the majority of their time working by themselves. Some learners do not mind this kind of solo learning, but others find it a major stumbling block to the successful completion of courses. Because of the potential difficulty of maintaining the schedule needed to be successful when learning online, some virtual universities apply the same type of time management as traditional schools. Many courses operate to a timetable, which the student receives with the course materials. These may include the planned activities for each week of the course and due dates for the assignments. If the course has an exam, the students will be informed where they have to go to write it. An example of a university that maintains a tight schedule is the Virtual Global University (VGU) in Germany. VGU offers a graduate program, the "International Master of Business Informatics" (MBI)—a master's program in information technology and management that takes an average of four semesters to complete (for full-time students). Each course has a lecture or a virtual class meeting every week. Afterwards, students get a homework assignment; for example, they have to solve an exercise, elaborate on some problem, discuss a case study, or take a test. Lecturers give them immediate feedback, and one week later the cycle repeats. Coursework can be the same for a virtual university as for an on-campus university in certain cases. NYU Tandon Online, for example, provides the same coursework to its online students as to the on-campus students at the NYU Tandon School of Engineering, delivered using advanced course technologies. Teaching modes When online courses first began, the primary mode of delivery was a two-way audio-visual network. Then, as now, many virtual study programs were mainly based on text documents, but multimedia technologies have become increasingly popular as well. These web-based delivery modes are used in order to expand access to programs and services that can be offered anytime and anywhere. The spectrum of teaching modes in virtual education includes courses based on hypertext, videos, audio, e-mails, and video conferencing. Teaching on the web through courseware such as WebCT and Blackboard is also used. See Virtual education.
Quality Students taking "virtual" courses are doing real work to get their degrees, and educators preparing and teaching those courses spend real time in doing so. That is, students meet a comparable level of academic learning outcomes and are evaluated through programs constructed according to standard university-level criteria.[clarification needed] Virtual universities may be accredited in the same way as traditional universities and operate according to a similar set of academic standards, though this should not be assumed. However, questions remain about accreditation and the quality of assessment. Accreditation is required to assure students that the online institute has certified online instructors who have the expertise and educational qualifications to design and carry out the curriculum. Assessment standards need to be particularly closely monitored in virtual universities. For example, respondents in studies of opinions about online degrees will rate an online degree from Stanford the same as an on-campus degree, because the name of the granting institution is recognized.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dialogue_tree] | [TOKENS: 931] |
Contents Dialogue tree A dialogue tree, or conversation tree, is a gameplay mechanic that is used throughout many adventure games (including action-adventure games) and role-playing video games. When interacting with a non-player character, the player is given a choice of what to say and makes subsequent choices until the conversation ends. Certain video game genres, such as visual novels and dating sims, revolve almost entirely around these character interactions and branching dialogues. History The concept of the dialogue tree existed long before the advent of video games. The earliest known dialogue tree is described in "The Garden of Forking Paths", a 1941 short story by Jorge Luis Borges, in which the combination book of Ts'ui Pên allows all major outcomes of an event to branch into their own chapters. Much like its game counterparts, this story reconverges as it progresses (otherwise the possible outcomes would approach n^m, where n is the number of options at each fork and m is the depth of the tree). The first computer dialogue system was featured in ELIZA, a primitive natural language processing computer program written by Joseph Weizenbaum between 1964 and 1966. The program emulated interaction between the user and an artificial therapist. With the advent of video games, interactive entertainment has attempted to incorporate meaningful interactions with virtual characters. Branching dialogues have since become a common feature in visual novels, dating sims, adventure games, and role-playing video games. Game mechanics The player typically enters the gameplay mode by choosing to speak with a non-player character (or when a non-player character chooses to speak to them), and then choosing a line of pre-written dialog from a menu. Upon choosing what to say, the non-player character responds to the player, and the player is given another choice of what to say. This cycle continues until the conversation ends. The conversation may end when the player selects a farewell message, when the non-player character has nothing more to add and ends the conversation, or when the player makes a bad choice (perhaps angering the non-player character into leaving the conversation). Games often offer options to ask non-player characters to reiterate information about a topic, allowing players to replay parts of the conversation that they did not pay close enough attention to the first time. These conversations are said to be designed as a tree structure, with players deciding between each branch of dialog to pursue. Unlike a branching story, players may return to earlier parts of a conversation tree and repeat them. Each branch point (or node) is essentially a different menu of choices, and each choice that the player makes triggers a response from the non-player character followed by a new menu of choices. In some genres, such as role-playing video games, external factors such as charisma may influence the response of the non-player character or unlock options that would not be available to other characters. These conversations can have far-reaching consequences, such as deciding to disclose a valuable secret that has been entrusted to the player. However, these structures are usually not true trees in the programmer's sense, because they can contain cycles, as in the sketch below.
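A minimal sketch of the structure just described (illustrative only; the node names and lines are invented, not taken from any particular game) represents the dialogue as a graph of menu nodes, where each choice carries the next node to visit and an edge may loop back to an earlier node:

    # A minimal dialogue "tree" -- really a graph, since choices may loop back.
    # Node names and dialogue lines are hypothetical examples.
    DIALOGUE = {
        "greet": {
            "npc": "Welcome, traveler. What do you want to know?",
            "choices": [
                ("Tell me about the village.", "village"),
                ("Who are you?", "identity"),
                ("Goodbye.", None),           # None ends the conversation
            ],
        },
        "village": {
            "npc": "The village has stood here for three hundred years.",
            "choices": [
                ("Anything else?", "greet"),  # cycle back to the main menu
                ("Goodbye.", None),
            ],
        },
        "identity": {
            "npc": "Just an old gatekeeper.",
            "choices": [("Back.", "greet"), ("Goodbye.", None)],
        },
    }

    def run(node="greet"):
        while node is not None:
            entry = DIALOGUE[node]
            print("NPC:", entry["npc"])
            for i, (line, _) in enumerate(entry["choices"], 1):
                print(f"  {i}. {line}")
            pick = int(input("> ")) - 1           # choose a line of dialogue
            node = entry["choices"][pick][1]      # follow the chosen branch

The "Anything else?" edge back to the greet node is exactly what makes this a cyclic graph rather than a true tree, matching the observation above.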
Certain game genres revolve almost entirely around character interactions, including visual novels such as Ace Attorney and dating sims such as Tokimeki Memorial, usually featuring complex branching dialogues and often presenting the player's possible responses word-for-word as the player character would say them. Games revolving around relationship-building, including visual novels, dating sims such as Tokimeki Memorial, and some role-playing games such as Shin Megami Tensei: Persona, often give choices that carry a different number of associated "mood points" which influence a player character's relationship and future conversations with a non-player character. These games often feature a day-night cycle with a time scheduling system that provides context and relevance to character interactions, allowing players to choose when and if to interact with certain characters, which in turn influences their responses during later conversations. Some games use a real-time conversation system, giving the player only a few seconds to respond to a non-player character, such as Sega's Sakura Wars and Alpha Protocol. Another variation of branching dialogues can be seen in the adventure game Culpa Innata, where the player chooses a tactic at the beginning of a conversation, such as using a formal, casual or accusatory manner, that affects the tone of the conversation and the information gleaned from the interviewee. Value and impact This mechanism allows game designers to provide interactive conversations with non-player characters without having to tackle the challenges of natural language processing in the field of artificial intelligence. In games such as Monkey Island, these conversations can help demonstrate the personality of certain characters.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Shallow_copy] | [TOKENS: 2432] |
Contents Object copying In object-oriented programming, object copying is creating a copy of an existing object, a unit of data in object-oriented programming. The resulting object is called an object copy or simply a copy of the original object. Copying is basic but has subtleties and can have significant overhead. There are several ways to copy an object, most commonly by a copy constructor or cloning. Copying is done mostly so the copy can be modified or moved, or so the current value is preserved. If neither of these is needed, a reference to the original data is sufficient and more efficient, as no copying occurs. Objects in general store composite data. While in simple cases copying can be done by allocating a new, uninitialized object and copying all fields (attributes) from the original object, in more complex cases this does not result in the desired behavior. Methods of copying The design goal of most objects is to give the appearance of being a single monolithic block, even though most are not. As objects are made up of several different parts, copying becomes nontrivial. Several strategies exist to address this problem. Consider an object A which contains fields x_i (more concretely, if A is a string, x_i might be an array of its characters). There are different strategies for making a copy of A, referred to as shallow copy and deep copy. Many languages allow generic copying by one or both strategies, defining either one copy operation or separate shallow copy and deep copy operations. Note that shallower still is to use a reference to the existing object A, in which case there is no new object, only a new reference. The terminology of shallow copy and deep copy dates to Smalltalk-80. The same distinction holds for comparing objects for equality: most basically there is a difference between identity (same object) and equality (same value), corresponding to shallow equality and (1-level) deep equality of two object references, but then further whether equality means comparing only the fields of the object in question or dereferencing some or all fields and comparing their values in turn (e.g., are two linked lists equal if they have the same nodes, or if they have the same values?).[clarification needed] One method of copying an object is the shallow copy. In that case a new object B is created, and the field values of A are copied over to B. This is also known as a field-by-field copy, field-for-field copy, or field copy. If the field value is a reference to an object (e.g., a memory address), it copies the reference, hence referring to the same object as A does; if the field value is a primitive type, it copies the value of the primitive type. In languages without primitive types (where everything is an object), all fields of the copy B are references to the same objects as the fields of the original A. The referenced objects are thus shared, so if one of these objects is modified (from A or B), the change is visible in the other. Shallow copies are simple and typically cheap, as they can usually be implemented by simply copying the bits exactly. An alternative is a deep copy, meaning that fields are dereferenced: rather than references to objects being copied, new copies are created for any referenced objects, and references to these are placed in B. Later modifications to the contents remain unique to A or B, as the contents are not shared. The difference is illustrated in the sketch below.
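A short sketch in Python (using the standard copy module, which is described later in this article) makes the shared-reference behavior concrete:

    import copy

    a = {"name": "A", "tags": ["x", "y"]}   # object with a nested, mutable field

    shallow = copy.copy(a)       # new dict, but "tags" is the same list object
    deep = copy.deepcopy(a)      # new dict and a new, independent "tags" list

    a["tags"].append("z")
    print(shallow["tags"])       # ['x', 'y', 'z'] -- change visible through the copy
    print(deep["tags"])          # ['x', 'y']      -- the deep copy is unaffected

The shallow copy shares the nested list, so a mutation made through the original is visible through the copy; the deep copy duplicated the list and is isolated.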
In more complex cases, some fields in a copy should have shared values with the original object (as in a shallow copy), corresponding to an "association" relationship; some fields should have copies (as in a deep copy), corresponding to an "aggregation" relationship. In these cases a custom implementation of copying is generally required; this issue and its solution date to Smalltalk-80. Alternatively, fields can be marked as requiring a shallow copy or a deep copy, and copy operations automatically generated (likewise for comparison operations). This is not implemented in most object-oriented languages, however, though there is partial support in Eiffel. Implementation Nearly all object-oriented programming languages provide some way to copy objects. Because a language cannot know how every user-defined object should be copied, the programmer must define how an object should be copied, just as they must define whether two objects are identical or even comparable in the first place. Many languages provide some default behavior. How copying is handled varies from language to language, as does the concept of an object each language has. A lazy copy is an implementation of a deep copy. When initially copying an object, a (fast) shallow copy is used. A counter is also used to track how many objects share the data. When the program wants to modify an object, it can determine whether the data is shared (by examining the counter) and can do a deep copy if needed. Lazy copy looks to the outside just like a deep copy, but takes advantage of the speed of a shallow copy whenever possible. The downside is a rather high but constant base cost because of the counter. Also, in certain situations, circular references can cause problems. Lazy copy is closely related to copy-on-write; note that in C++ the default copy constructor performs a member-wise copy, so copy-on-write behavior must be implemented explicitly. A sketch of the counter scheme appears below. The following presents examples for one of the most widely used object-oriented languages, Java, which should cover nearly every way that an object-oriented language can treat this problem. Unlike in C++, objects in Java are always accessed indirectly through references. Objects are never created implicitly but instead are always passed or assigned by a reference variable. (Methods in Java are always pass-by-value; however, it is the value of the reference variable that is being passed.) The Java Virtual Machine manages garbage collection so that objects are cleaned up after they are no longer reachable. There is no automatic way to copy any given object in Java. Copying is usually performed by a clone() method of a class. This method usually, in turn, calls the clone() method of its parent class to obtain a copy, and then does any custom copying procedures. Eventually this gets to the clone() method of Object (the uppermost class), which creates a new instance of the same class as the object and copies all the fields to the new instance (a "shallow copy"). If this method is used, the class must implement the Cloneable marker interface, or else it will throw a CloneNotSupportedException. After obtaining a copy from the parent class, a class' own clone() method may then provide custom cloning capability, like deep copying (i.e. duplicating some of the structures referred to by the object) or giving the new instance a new unique ID. The return type of clone() is Object, but implementers of a clone method can write the type of the object being cloned instead, due to Java's support for covariant return types.
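Returning to the lazy copy scheme described earlier in this section, the counter mechanism can be sketched in Python (an illustrative toy wrapper, not a standard library facility):

    class LazyList:
        """Copy-on-write wrapper: copies share data until one of them writes."""

        def __init__(self, items):
            self._data = list(items)
            self._refs = [1]          # shared counter: how many wrappers share _data

        def copy(self):
            other = LazyList.__new__(LazyList)
            other._data = self._data          # shallow: share the underlying list
            other._refs = self._refs
            self._refs[0] += 1
            return other

        def _ensure_unique(self):
            if self._refs[0] > 1:             # data is shared with another wrapper:
                self._refs[0] -= 1            # detach, then copy on demand
                self._data = list(self._data)
                self._refs = [1]

        def __getitem__(self, i):
            return self._data[i]              # reads never copy

        def __setitem__(self, i, value):
            self._ensure_unique()             # writes trigger the real copy
            self._data[i] = value

    a = LazyList([1, 2, 3])
    b = a.copy()        # cheap: no data copied yet
    b[0] = 99           # now b detaches and copies
    print(a[0], b[0])   # 1 99

From the outside this behaves like a deep copy, but the actual duplication is deferred until the first write, as described above.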
One advantage of using clone() is that, since it is an overridable method, calling clone() on any object will use the clone() method of its class, without the calling code needing to know what that class is (which would be needed with a copy constructor). A disadvantage is that one often cannot access the clone() method on an abstract type. Most interfaces and abstract classes in Java do not specify a public clone() method. Thus, often the only way to use the clone() method is when the class of an object is known, which is contrary to the abstraction principle of using the most generic type possible. For example, if one has a List reference in Java, one cannot invoke clone() on that reference because List specifies no public clone() method. Implementations of List like ArrayList and LinkedList generally have clone() methods, but it is inconvenient and bad abstraction to carry around the class type of an object. Another way to copy objects in Java is to serialize them through the Serializable interface. This is typically used for persistence and wire protocol purposes, but it does create copies of objects and, unlike clone, a deep copy that gracefully handles cyclic graphs of objects is readily available with minimal effort from the programmer. Both of these methods suffer from a notable problem: the constructor is not used for objects copied with clone or serialization. This can lead to bugs with improperly initialized data, prevents the use of final member fields, and makes maintenance challenging. Some utilities attempt to overcome these issues by using reflection to deep copy objects, such as the deep-cloning library. Runtime objects in Eiffel are accessible either indirectly through references or as expanded objects, whose fields are embedded within the objects that use them. That is, the fields of an object are stored either externally or internally. The Eiffel class ANY contains features for shallow and deep copying and cloning of objects. All Eiffel classes inherit from ANY, so these features are available within all classes, and are applicable both to reference and expanded objects. The copy feature effects a shallow, field-by-field copy from one object to another. In this case no new object is created. If y were copied to x, then the same objects referenced by y before the application of copy will also be referenced by x after the copy feature completes. To effect the creation of a new object which is a shallow duplicate of y, the feature twin is used. In this case, one new object is created with its fields identical to those of the source. The feature twin relies on the feature copy, which can be redefined in descendants of ANY if needed. The result of twin is of the anchored type like Current. Deep copying and creating deep twins can be done using the features deep_copy and deep_twin, again inherited from class ANY. These features have the potential to create many new objects, because they duplicate all the objects in an entire object structure. Because new duplicate objects are created instead of simply copying references to existing objects, deep operations become a source of performance issues more readily than shallow operations. In C#, rather than using the interface ICloneable, a generic extension method can be used to create a deep copy using reflection. This has two advantages: First, it provides the flexibility to copy every object without having to specify each property and variable to be copied manually.
Second, because the type is generic, the compiler ensures that the destination object and the source object have the same type. In Objective-C, the methods copy and mutableCopy are inherited by all objects and are intended for performing copies; the latter creates a mutable copy of the original object. These methods in turn call the copyWithZone and mutableCopyWithZone methods, respectively, to perform the copying. An object must implement the corresponding copyWithZone method to be copyable. In OCaml, the library function Oo.copy performs shallow copying of an object. In Python, the standard library's copy module provides shallow copy and deep copy of objects through the copy() and deepcopy() functions, respectively. Programmers may define special methods __copy__() and __deepcopy__() in an object to provide a custom copying implementation. In Ruby, all objects inherit two methods for performing shallow copies, clone and dup. The two methods differ in that clone copies an object's tainted state, frozen state, and any singleton methods it may have, whereas dup copies only its tainted state. Deep copies may be achieved by dumping and loading an object's byte stream or YAML serialization. Alternatively, the deep_dive gem can be used to perform a controlled deep copy of object graphs. In Rust, structs can implement the clone method via the Clone trait. In Perl, nested structures are stored by the use of references, so a developer can either loop over the entire structure and re-reference the data or use the dclone() function from the module Storable. In VBA, an assignment of variables of type Object is a shallow copy, while an assignment for all other types (numeric types, String, user-defined types, arrays) is a deep copy. So the keyword Set for an assignment signals a shallow copy and the (optional) keyword Let signals a deep copy. There is no built-in method for deep copies of Objects in VBA.
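The Python hooks mentioned above can be sketched as follows (a hypothetical Node class for illustration; the memo dictionary is the mechanism by which deepcopy handles shared and cyclic references):

    import copy

    class Node:
        def __init__(self, value, children=None):
            self.value = value
            self.children = children if children is not None else []

        def __copy__(self):
            # Shallow: a new Node that shares the same children list object.
            return Node(self.value, self.children)

        def __deepcopy__(self, memo):
            # memo maps id(original) -> copy, so cycles and shared
            # children are each copied exactly once.
            new = Node(self.value)
            memo[id(self)] = new
            new.children = [copy.deepcopy(c, memo) for c in self.children]
            return new

    root = Node("root", [Node("child")])
    root.children.append(root)          # deliberately create a cycle

    clone = copy.deepcopy(root)         # the memo handles the cycle
    print(clone is clone.children[1])   # True: the cycle is reproduced

Registering the new object in the memo before copying the children is what prevents infinite recursion on the cyclic reference, the same problem noted above for naive deep copying.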
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Occam-%CF%80] | [TOKENS: 141] |
Contents occam-π In computer science, occam-π (or occam-pi) is the name of a variant of the programming language occam developed by the Kent Retargetable occam Compiler (KRoC) team at the University of Kent. The name reflects the introduction of elements of π-calculus (pi-calculus) into occam, especially concepts involving mobile agents (processes) and data. The language contains several extensions to occam 2.1.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:CS1_maint:_others] | [TOKENS: 595] |
Category:CS1 maint: others This is a tracking category for CS1 citations that use |others= without also using |author= or |editor= or any of their aliases. |others= is provided to record other (secondary) contributors to the cited source. Articles are listed in this category when Module:Citation/CS1 identifies a template that does not identify primary contributors. Pages in this category should only be added by Module:Citation/CS1. Pages with this condition are automatically placed in Category:CS1 maint: others.[a] Some templates translate their parameters with different names into values for |others= in subsequent template calls. For example, |recipient= in {{cite letter}} is translated to |others= when invoking {{#invoke:template wrapper}}. Such behaviour may unexpectedly trigger this warning when invoking such a template. By default, Citation Style 1 and Citation Style 2 error messages are visible to all readers, and maintenance messages are hidden from all readers. To display maintenance messages in the rendered article, include the appropriate CSS rule in your common CSS page (common.css) or your specific skin's CSS page (skin.css). (Note to new editors: those CSS pages are specific to you and control your view of pages by adding to your user account's CSS code. If you have not yet created such a page, then clicking one of the .css links above will yield a page that starts "Wikipedia does not have a user page with this exact name." Click the "Start the User:username/filename page" link, paste the text, save the page, follow the instructions at the bottom of the new page on bypassing your browser's cache, and finally, in order to see the previously hidden maintenance messages, refresh the page you were editing earlier.) Even with this CSS installed, older pages in Wikipedia's cache may not have been updated to show these error messages, even though the page is listed in one of the tracking categories. A null edit will resolve that issue. After error and/or maintenance messages are displayed, it might still not be easy to find them in a large article with a lot of citations. Messages can then be found by searching (with Ctrl-F) for "(help)" or "cs1". Normally-displayed error messages can likewise be hidden with a corresponding CSS rule. You can personalize the display of these messages (such as changing the color), but you will need to ask someone who knows CSS or at the technical village pump if you do not understand how. Nota bene: these CSS rules are not obeyed by Navigation popups. They also do not hide script warning messages in the Preview box that begin with "This is only a preview; your changes have not yet been saved".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Independence_number] | [TOKENS: 2038] |
Contents Independent set (graph theory) In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening. A maximal independent set is an independent set that is not a proper subset of any other independent set. A maximum independent set is an independent set of largest possible size for a given graph G. This size is called the independence number of G and is usually denoted by α(G). The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph. Every maximum independent set is also maximal, but the converse implication does not necessarily hold. Properties A set is independent if and only if it is a clique in the graph's complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory. A set is independent if and only if its complement is a vertex cover. Therefore, the sum of the size of the largest independent set α(G) and the size of a minimum vertex cover β(G) is equal to the number of vertices in the graph. A vertex coloring of a graph G corresponds to a partition of its vertex set into independent subsets. Hence the minimal number of colors needed in a vertex coloring, the chromatic number χ(G), is at least the quotient of the number of vertices in G and the independence number α(G). In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering; this is Kőnig's theorem. An independent set that is not a proper subset of another independent set is called maximal. Such sets are dominating sets. Every graph contains at most 3^(n/3) maximal independent sets, but many graphs have far fewer. The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number of maximal independent sets in n-vertex path graphs is given by the Padovan sequence. Therefore, both numbers are proportional to powers of 1.324718..., the plastic ratio. Finding independent sets In computer science, several computational problems related to independent sets have been studied. Several of these problems are important in practical applications; the independent set decision problem is not, but it is necessary in order to apply the theory of NP-completeness to problems related to independent sets. The independent set problem and the clique problem are complementary: a clique in G is an independent set in the complement graph of G and vice versa. Therefore, many computational results may be applied equally well to either problem. A brute-force check of the definitions is sketched below.
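The definitions above translate directly into a brute-force computation of α(G) (an illustrative Python sketch; the search is exponential in the number of vertices, so it is only usable on small graphs):

    from itertools import combinations

    def is_independent(vertices, edges):
        """True if no edge has both endpoints in the given vertex set."""
        s = set(vertices)
        return not any(u in s and v in s for u, v in edges)

    def independence_number(n, edges):
        """alpha(G) for a graph on vertices 0..n-1, by exhaustive search."""
        for size in range(n, 0, -1):                 # try large sets first
            for subset in combinations(range(n), size):
                if is_independent(subset, edges):
                    return size
        return 0

    # 5-cycle: alpha(C5) = 2, since any three of its vertices contain an edge
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(independence_number(5, edges))  # 2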
For example, results for the clique problem have corresponding corollaries for the independent set problem. Despite the close relationship between maximum cliques and maximum independent sets in arbitrary graphs, the independent set and clique problems may be very different when restricted to special classes of graphs. For instance, for sparse graphs (graphs in which the number of edges is at most a constant times the number of vertices in any subgraph), the maximum clique has bounded size and may be found exactly in linear time; however, for the same classes of graphs, or even for the more restricted class of bounded-degree graphs, finding the maximum independent set is MAXSNP-complete, implying that, for some constant c (depending on the degree), it is NP-hard to find an approximate solution that comes within a factor of c of the optimum. The maximum independent set problem is NP-hard. However, it can be solved more efficiently than the O(n^2 2^n) time that would be given by a naive brute-force algorithm that examines every vertex subset and checks whether it is an independent set. As of 2017 it can be solved in time O(1.1996^n) using polynomial space. When restricted to graphs with maximum degree 3, it can be solved in time O(1.0836^n). For many classes of graphs, a maximum weight independent set may be found in polynomial time. Famous examples are claw-free graphs, P5-free graphs and perfect graphs. For chordal graphs, a maximum weight independent set can be found in linear time. Modular decomposition is a good tool for solving the maximum weight independent set problem; the linear-time algorithm on cographs is the basic example of that. Another important tool is the clique separators described by Tarjan. Kőnig's theorem implies that in a bipartite graph the maximum independent set can be found in polynomial time using a bipartite matching algorithm. In general, the maximum independent set problem cannot be approximated to a constant factor in polynomial time (unless P = NP). In fact, Max Independent Set in general is Poly-APX-complete, meaning it is as hard as any problem that can be approximated to a polynomial factor. However, there are efficient approximation algorithms for restricted classes of graphs. In planar graphs, the maximum independent set may be approximated to within any approximation ratio c < 1 in polynomial time; similar polynomial-time approximation schemes exist in any family of graphs closed under taking minors. In bounded-degree graphs, effective approximation algorithms are known with approximation ratios that are constant for a fixed value of the maximum degree; for instance, a greedy algorithm that forms a maximal independent set by, at each step, choosing the minimum-degree vertex in the graph and removing its neighbors achieves an approximation ratio of (Δ+2)/3 on graphs with maximum degree Δ (sketched below). Approximation hardness bounds for such instances were proven in Berman & Karpinski (1999). Indeed, even Max Independent Set on 3-regular 3-edge-colorable graphs is APX-complete. An interval graph is a graph in which the nodes are 1-dimensional intervals (e.g. time intervals) and there is an edge between two intervals if and only if they intersect. An independent set in an interval graph is just a set of non-overlapping intervals.
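The minimum-degree greedy heuristic just mentioned is short enough to sketch before turning to interval graphs (an illustrative Python version; the vertex names are arbitrary):

    def greedy_mis(adj):
        """Maximal independent set: repeatedly take a minimum-degree vertex
        and delete it together with its neighbors. adj: dict vertex -> set."""
        adj = {v: set(ns) for v, ns in adj.items()}   # local mutable copy
        result = []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
            result.append(v)
            removed = adj.pop(v) | {v}                # v and its neighbors
            for u in removed - {v}:
                adj.pop(u, None)
            for ns in adj.values():
                ns -= removed                         # drop edges to removed vertices
        return result

    # Path a-b-c-d: the low-degree endpoints are taken first, giving {a, c}
    graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(greedy_mis(graph))

The result is always maximal (every remaining vertex is deleted only because it neighbors a chosen one), and preferring low-degree vertices is what yields the (Δ+2)/3 ratio cited above.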
The problem of finding maximum independent sets in interval graphs has been studied, for example, in the context of job scheduling: given a set of jobs that have to be executed on a computer, find a maximum set of jobs that can be executed without interfering with each other. This problem can be solved exactly in polynomial time using earliest-deadline-first scheduling, as sketched below. A geometric intersection graph is a graph in which the nodes are geometric shapes and there is an edge between two shapes if and only if they intersect. An independent set in a geometric intersection graph is just a set of disjoint (non-overlapping) shapes. The problem of finding maximum independent sets in geometric intersection graphs has been studied, for example, in the context of automatic label placement: given a set of locations in a map, find a maximum set of disjoint rectangular labels near these locations. Finding a maximum independent set in intersection graphs is still NP-complete, but it is easier to approximate than the general maximum independent set problem. A recent survey can be found in the introduction of Chan & Har-Peled (2012). A d-claw in a graph is a set of d+1 vertices, one of which (the "center") is connected to the other d vertices, but the other d vertices are not connected to each other. A d-claw-free graph is a graph that does not have a d-claw subgraph. Consider the algorithm that starts with an empty set and incrementally adds an arbitrary vertex to it as long as it is not adjacent to any existing vertex. In d-claw-free graphs, every added vertex invalidates at most d − 1 vertices from the maximum independent set; therefore, this trivial algorithm attains a (d − 1)-approximation for the maximum independent set. In fact, it is possible to get much better approximation ratios. The problem of finding a maximal independent set can be solved in polynomial time by a trivial parallel greedy algorithm. All maximal independent sets can be found in time O(3^(n/3)) = O(1.4423^n). The counting problem #IS asks, given an undirected graph, how many independent sets it contains. This problem is intractable: namely, it is ♯P-complete, already on graphs with maximum degree three. It is further known that, assuming that NP is different from RP, the problem cannot be tractably approximated, in the sense that it does not have a fully polynomial-time randomized approximation scheme (FPRAS), even on graphs with maximum degree six; however, it does have a fully polynomial-time approximation scheme (FPTAS) in the case where the maximum degree is five. The problem #BIS, of counting independent sets on bipartite graphs, is also ♯P-complete, already on graphs with maximum degree three. It is not known whether #BIS admits an FPRAS. The question of counting maximal independent sets has also been studied. Applications The maximum independent set problem and its complement, the minimum vertex cover problem, are involved in proving the computational complexity of many theoretical problems.
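Returning to the interval scheduling application above: the polynomial-time solution amounts to the classic greedy rule of repeatedly keeping the job that finishes earliest (an illustrative Python sketch; intervals are treated as half-open, so touching endpoints do not conflict):

    def max_nonoverlapping(intervals):
        """Maximum independent set in an interval graph: sort by end time,
        greedily keep every interval that starts after the last kept end."""
        chosen, last_end = [], float("-inf")
        for start, end in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_end:         # does not overlap anything chosen
                chosen.append((start, end))
                last_end = end
        return chosen

    jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
    print(max_nonoverlapping(jobs))  # [(1, 4), (5, 7), (8, 11)]

Choosing the earliest-finishing compatible interval never costs anything, since any other choice would block at least as many later intervals; this exchange argument is what makes the greedy exact here, in contrast to the general graphs discussed above.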
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Spinal_cord_injury] | [TOKENS: 9864] |
Contents Spinal cord injury A spinal cord injury (SCI) is damage to the spinal cord that causes temporary or permanent changes in its function. It is a destructive neurological and pathological state that causes major motor, sensory and autonomic dysfunctions. Symptoms of spinal cord injury may include loss of muscle function, sensation, or autonomic function in the parts of the body served by the spinal cord below the level of the injury. Injury can occur at any level of the spinal cord and can be complete, with a total loss of sensation and muscle function at the lower sacral segments, or incomplete, meaning some nervous signals are able to travel past the injured area of the cord down to the sacral S4–S5 spinal cord segments. Depending on the location and severity of damage, the symptoms vary, from numbness to paralysis, including bowel or bladder incontinence. Long-term outcomes also range widely, from full recovery to permanent tetraplegia (also called quadriplegia) or paraplegia. Complications can include muscle atrophy, loss of voluntary motor control, spasticity, pressure sores, infections, and breathing problems. In the majority of cases the damage results from physical trauma such as car accidents, gunshot wounds, falls, or sports injuries, but it can also result from nontraumatic causes such as infection, insufficient blood flow, and tumors. Just over half of injuries affect the cervical spine, while 15% occur in each of the thoracic spine, the border between the thoracic and lumbar spine, and the lumbar spine alone. Diagnosis is typically based on symptoms and medical imaging. Efforts to prevent SCI include individual measures such as using safety equipment, societal measures such as safety regulations in sports and traffic, and improvements to equipment. Treatment starts with restricting further motion of the spine and maintaining adequate blood pressure. Corticosteroids have not been found to be useful. Other interventions vary depending on the location and extent of the injury, from bed rest to surgery. In many cases, spinal cord injuries require long-term physical and occupational therapy, especially if they interfere with activities of daily living. In the United States, about 12,000 people annually survive a spinal cord injury. The most commonly affected group are young adult males. SCI has seen great improvements in its care since the middle of the 20th century. Research into potential treatments includes stem cell implantation, hypothermia, engineered materials for tissue support, epidural spinal stimulation, and wearable robotic exoskeletons. Classification Spinal cord injury can be traumatic or nontraumatic, and can be classified into three types based on cause: mechanical forces, toxic, and ischemic from lack of blood flow. The damage can also be divided into primary and secondary injury: the cell death that occurs immediately in the original injury, and the biochemical cascades that are initiated by the original insult and cause further tissue damage. These secondary injury pathways include the ischemic cascade, inflammation, swelling, cell suicide (apoptosis), and neurotransmitter imbalances. They can take place over minutes to weeks following the injury. At each level of the spinal column, spinal nerves branch off from either side of the spinal cord and exit between a pair of vertebrae, to innervate a specific part of the body. The area of skin innervated by a specific spinal nerve is called a dermatome, and the group of muscles innervated by a single spinal nerve is called a myotome.
The part of the spinal cord that was damaged corresponds to the spinal nerves at that level and below. Injuries can be cervical 1–8 (C1–C8), thoracic 1–12 (T1–T12), lumbar 1–5 (L1–L5), or sacral (S1–S5). A person's level of injury is defined as the lowest level of full sensation and function. Paraplegia occurs when the legs are affected by the spinal cord damage (in thoracic, lumbar, or sacral injuries), and tetraplegia occurs when all four limbs are affected (cervical damage). SCI is also classified by the degree of impairment. The International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI), published by the American Spinal Injury Association (ASIA), is widely used to document sensory and motor impairments following SCI. It is based on neurological responses, touch and pinprick sensations tested in each dermatome, and the strength of the muscles that control key motions on both sides of the body. Muscle strength is scored on a scale of 0–5, and sensation is graded on a scale of 0–2: 0 is no sensation, 1 is altered or decreased sensation, and 2 is full sensation. Each side of the body is graded independently. In a "complete" spinal injury, all functions below the injured area are lost, whether or not the spinal cord is severed. An "incomplete" spinal cord injury involves preservation of motor or sensory function below the level of injury in the spinal cord. To be classed as incomplete, there must be some preservation of sensation or motion in the areas innervated by S4 to S5, including voluntary external anal sphincter contraction. The nerves in this area are connected to the very lowest region of the spinal cord, and retaining sensation and function in these parts of the body indicates that the spinal cord is only partially damaged. Incomplete injury by definition includes a phenomenon known as sacral sparing: some degree of sensation is preserved in the sacral dermatomes, even though sensation may be more impaired in other, higher dermatomes below the level of the lesion. Sacral sparing has been attributed to the fact that the sacral spinal pathways are not as likely as the other spinal pathways to become compressed after injury, due to the lamination of the fibers within the spinal cord. Spinal cord injury without radiographic abnormality exists when spinal cord injury is present but there is no evidence of spinal column injury on radiographs. Spinal column injury is trauma that causes fracture of the bone or instability of the ligaments in the spine; this can coexist with or cause injury to the spinal cord, but each injury can occur without the other. Abnormalities might show up on magnetic resonance imaging (MRI), but the term was coined before MRI was in common use. Central cord syndrome, almost always resulting from damage to the cervical spinal cord, is characterized by weakness in the arms with relative sparing of the legs, and spared sensation in regions served by the sacral segments. There is loss of sensation of pain, temperature, light touch, and pressure below the level of injury. The spinal tracts that serve the arms are more affected due to their central location in the spinal cord, while the corticospinal fibers destined for the legs are spared due to their more external location. The most common of the incomplete SCI syndromes, central cord syndrome usually results from neck hyperextension in older people with spinal stenosis. In younger people, it most commonly results from neck flexion.
The most common causes are falls and vehicle accidents; other possible causes include spinal stenosis and impingement on the spinal cord by a tumor or intervertebral disc. Anterior spinal artery syndrome, also known as anterior spinal cord syndrome, is due to damage to the front portion of the spinal cord or reduction in the blood supply from the anterior spinal artery, and can be caused by fractures or dislocations of vertebrae or by herniated disks. Below the level of injury, motor function, pain sensation, and temperature sensation are lost, while the sense of touch and proprioception (sense of position in space) remain intact. These differences are due to the relative locations of the spinal tracts responsible for each type of function. Brown-Séquard syndrome occurs when the spinal cord is injured on one side much more than the other. It is rare for the spinal cord to be truly hemisected (severed on one side), but partial lesions due to penetrating wounds (such as gunshot or knife wounds) or fractured vertebrae or tumors are common. On the ipsilateral side of the injury (same side), the body loses motor function, proprioception, and the senses of vibration and touch. On the contralateral side (opposite side) of the injury, there is a loss of pain and temperature sensations. If the injury is above the pyramidal decussation there is contralateral hemiplegia; at the level of the decussation there is complete motor loss on both sides; and below the pyramidal decussation there is ipsilateral hemiplegia. The spinothalamic tracts carry pain and temperature sensation; because these tracts cross to the opposite side within and above the spinal cord, the loss is on the contralateral side. Posterior spinal artery syndrome (PSAS), in which just the dorsal columns of the spinal cord are affected, is usually seen in cases of chronic myelopathy but can also occur with infarction of the posterior spinal artery. This rare syndrome causes the loss of proprioception and the sense of vibration below the level of injury, while motor function and the sensation of pain, temperature, and touch remain intact. Usually posterior cord injuries result from insults like disease or vitamin deficiency rather than trauma. Tabes dorsalis, due to injury to the posterior part of the spinal cord caused by syphilis, results in loss of touch and proprioceptive sensation. Conus medullaris syndrome is an injury to the end of the spinal cord, the conus medullaris, located at about the T12–L2 vertebrae in adults. This region contains the S4–S5 spinal segments, responsible for bowel, bladder, and some sexual functions, so these can be disrupted in this type of injury. In addition, sensation and the Achilles reflex can be disrupted. Causes include tumors, physical trauma, and ischemia. Cauda equina syndrome (CES) results from a lesion below the level at which the spinal cord ends. The descending nerve roots continue as the cauda equina at levels L2–S5, below the conus medullaris, before exiting through the intervertebral foramina. Thus it is not a true spinal cord syndrome, since it is nerve roots that are damaged and not the cord itself; however, it is common for several of these nerves to be damaged at the same time due to their proximity. CES can occur by itself or alongside conus medullaris syndrome, and may be caused by central disc prolapse (slipped disc), infections such as epidural abscess, spinal haemorrhages, complications of medical procedures, or birth abnormalities.
It can cause low back pain, weakness or paralysis in the lower limbs, loss of sensation, bowel and bladder dysfunction, and loss of reflexes. There may be bilateral sciatica with central disc prolapse, and altered gait. Unlike conus medullaris syndrome, symptoms often occur only on one side of the body. The cause is often compression, e.g. by a ruptured intervertebral disk or a tumor. Since the nerves damaged in CES are actually peripheral nerves, because they have already branched off from the spinal cord, the injury has a better prognosis for recovery of function: the peripheral nervous system has a greater capacity for healing than the central nervous system. Signs and symptoms Signs (observed by a clinician) and symptoms (experienced by a patient) vary depending on where the spine is injured and the extent of the injury. A section of skin innervated through a specific part of the spine is called a dermatome, and injury to that part of the spine can cause pain, numbness, or a loss of sensation in the related areas. Paraesthesia, a tingling or burning sensation in affected areas of the skin, is another symptom. A person with a lowered level of consciousness may show a response to a painful stimulus above a certain point but not below it. A group of muscles innervated through a specific part of the spine is called a myotome, and injury to that part of the spinal cord can cause problems with movements that involve those muscles. The muscles may contract uncontrollably (spasticity), become weak, or be completely paralysed. Spinal shock, loss of neural activity including reflexes below the level of injury, occurs shortly after the injury and usually goes away within a day. Priapism, an erection of the penis, may be a sign of acute spinal cord injury. The specific parts of the body affected by loss of function are determined by the level of injury. Some signs, such as bowel and bladder dysfunction, can occur at any level. Neurogenic bladder involves a compromised ability to empty the bladder and is a common symptom of spinal cord injury. This can lead to high pressures in the bladder that can damage the kidneys. Spinal cord injury locations Spinal cord injuries at the cervical vertebrae (neck) level result in full or partial tetraplegia, also called quadriplegia. Depending on the specific location and severity of trauma, limited function may be retained. Additional symptoms of cervical injuries include low heart rate, low blood pressure, problems regulating body temperature, and breathing dysfunction. If the injury is high enough in the neck to impair the muscles involved in breathing, the person may not be able to breathe without the help of an endotracheal tube and mechanical ventilator. The effects of injuries at or above the lumbar or sacral regions of the spinal cord (lower back and pelvis) include decreased control of the legs and hips, genitourinary system, and anus. People injured below level L2 may still have use of their hip flexor and knee extensor muscles. Bowel and bladder function are regulated by the sacral region. It is common to experience sexual dysfunction after injury, as well as dysfunction of the bowel and bladder, including fecal and urinary incontinence. In addition to the problems found in lower-level injuries, thoracic (chest-height) spinal lesions can affect the muscles in the trunk. Injuries at the level of T1 to T8 result in an inability to control the abdominal muscles. Trunk stability may be affected, even more so in higher-level injuries.
The lower the level of injury, the less extensive its effects. Injuries from T9 to T12 result in partial loss of trunk and abdominal muscle control. Thoracic spinal injuries result in paraplegia, but function of the hands, arms, and neck is not affected.

One condition that typically occurs in lesions above the T6 level is autonomic dysreflexia (AD), in which the blood pressure increases to dangerous levels, high enough to cause a potentially deadly stroke. It results from an overreaction of the autonomic nervous system to a stimulus, such as pain, below the level of injury, because inhibitory signals from the brain cannot pass the lesion to dampen the excitatory sympathetic nervous system response. Signs and symptoms of AD include anxiety, headache, nausea, ringing in the ears, blurred vision, flushed skin, and nasal congestion. It can occur shortly after the injury or not until years later. Other autonomic functions may also be disrupted. For example, problems with body temperature regulation mostly occur in injuries at T8 and above. Another serious complication that can result from lesions above T6 is neurogenic shock, which results from an interruption in output from the sympathetic nervous system responsible for maintaining muscle tone in the blood vessels. Without the sympathetic input, the vessels relax and dilate. Neurogenic shock presents with dangerously low blood pressure, low heart rate, and blood pooling in the limbs, which results in insufficient blood flow to the spinal cord and potentially further damage to it.

Complications of spinal cord injuries include pulmonary edema, respiratory failure, neurogenic shock, and paralysis below the injury site. In the long term, the loss of muscle function can have additional effects from disuse, including muscle atrophy. Immobility can also lead to pressure sores, particularly in bony areas, requiring precautions such as extra cushioning and turning in bed every two hours (in the acute setting) to relieve pressure. In the long term, people in wheelchairs must shift periodically to relieve pressure. Another complication is pain, including nociceptive pain (an indication of potential or actual tissue damage) and neuropathic pain, in which nerves affected by damage convey erroneous pain signals in the absence of noxious stimuli. Spasticity, the uncontrollable tensing of muscles below the level of injury, occurs in 65–78% of people with chronic SCI. It results from the lack of input from the brain that would otherwise quell muscle responses to stretch reflexes. It can be treated with drugs and physical therapy. Spasticity increases the risk of contractures (shortening of muscles, tendons, or ligaments that results from lack of use of a limb); this problem can be prevented by moving the limb through its full range of motion multiple times a day. Another problem lack of mobility can cause is loss of bone density and changes in bone structure. Loss of bone density (bone demineralization), thought to be due to lack of input from weakened or paralysed muscles, can increase the risk of fractures. Conversely, a poorly understood phenomenon is the overgrowth of bone tissue in soft-tissue areas, called heterotopic ossification. It occurs below the level of injury, possibly as a result of inflammation, and happens to a clinically significant extent in 27% of people with SCI. People with spinal cord injury are at especially high risk for respiratory and cardiovascular problems, so hospital staff must be watchful to avoid them.
Respiratory problems (especially pneumonia) are the leading cause of death in people with SCI, followed by infections, usually of pressure sores, the urinary tract, or the respiratory tract. Pneumonia can be accompanied by shortness of breath, fever, and anxiety. Another potentially deadly threat to respiration is deep venous thrombosis (DVT), in which blood forms a clot in an immobile limb; the clot can break off and form a pulmonary embolism, lodging in the lung and cutting off its blood supply. DVT is an especially high risk in SCI, particularly within 10 days of injury, occurring in over 13% of patients in the acute care setting. Preventative measures include anticoagulants, compression stockings (pressure hose), and moving the patient's limbs. The usual signs and symptoms of DVT and pulmonary embolism may be masked in SCI cases due to effects such as alterations in pain perception and nervous system functioning.

Urinary tract infection (UTI) is another risk that may not display the usual symptoms (pain, urgency, and frequency); it may instead be associated with worsened spasticity. The risk of UTI, likely the most common complication in the long term, is heightened by the use of indwelling urinary catheters. Catheterization may be necessary because SCI interferes with the bladder's ability to empty when it gets too full, which could trigger autonomic dysreflexia or damage the bladder permanently. The use of intermittent catheterization to empty the bladder at regular intervals throughout the day has decreased mortality due to kidney failure from UTI in the first world, but it is still a serious problem in developing countries.

An estimated 24–45% of people with spinal cord injuries have major depressive disorder, and the suicide rate is as much as six times that of the rest of the population. The risk of suicide is highest in the first five years after injury. In young people with SCI, suicide is the leading cause of death. Depression is associated with an increased risk of other complications, such as UTI and pressure ulcers, which occur more often when self-care is neglected.

Causes
Spinal cord injuries are most often caused by physical trauma. The forces involved can be hyperflexion (forward movement of the head); hyperextension (backward movement); lateral stress (sideways movement); rotation (twisting of the head); compression (force along the axis of the spine, downward from the head or upward from the pelvis); or distraction (pulling apart of the vertebrae). Traumatic SCI can result in contusion, compression, or stretch injury. It is a major risk of many types of vertebral fracture. Pre-existing asymptomatic congenital anomalies can cause major neurological deficits, such as hemiparesis, to result from otherwise minor trauma.

In the U.S., motor vehicle accidents are the most common cause of SCIs; second are falls, then violence such as gunshot wounds, then sports injuries. A study from Asia found that falls were the most common cause of SCI (31.70%), including falls from rooftops (9.75%), from electric poles (7.31%), and from trees (7.31%); road traffic accidents accounted for 19.51%, firearm injuries for 12.19%, slips of the foot for 7.31%, and sports injuries for 4.87%. In some countries falls are more common, even surpassing vehicle crashes as the leading cause of SCI. The rates of violence-related SCI depend heavily on place and time.
Of all sports-related SCIs, shallow-water diving is the most common cause; winter sports and water sports have been increasing as causes, while association football and trampoline injuries have been declining. Hanging can cause injury to the cervical spine, as may occur in attempted suicide. Military conflicts are another cause, and when they occur they are associated with increased rates of SCI. Another potential cause of SCI is iatrogenic injury, caused by an improperly performed medical procedure such as an injection into the spinal column.

SCI can also be of nontraumatic origin. The percentage varies by locale, influenced by efforts to prevent trauma. Developed countries have higher percentages of SCI due to degenerative conditions and tumors than developing countries. In developed countries, the most common cause of nontraumatic SCI is degenerative disease, followed by tumors; in many developing countries the leading cause is infection, such as HIV and tuberculosis. SCI may occur in intervertebral disc disease and spinal cord vascular disease. Spontaneous bleeding can occur within or outside of the protective membranes that line the cord, and intervertebral disks can herniate. Damage can result from dysfunction of the blood vessels, as in arteriovenous malformation, or when a blood clot becomes lodged in a blood vessel and cuts off blood supply to the cord. When systemic blood pressure drops, blood flow to the spinal cord may be reduced, potentially causing a loss of sensation and voluntary movement in the areas supplied by the affected level of the spinal cord. Congenital conditions and tumors that compress the cord can also cause SCI, as can vertebral spondylosis and ischemia. Multiple sclerosis is a disease that can damage the spinal cord, as can infectious or inflammatory conditions such as tuberculosis, herpes zoster or herpes simplex, meningitis, myelitis, and syphilis.

Prevention
Vehicle-related spinal cord injury is prevented with measures including societal and individual efforts to reduce driving under the influence of drugs or alcohol, distracted driving, and drowsy driving. Other efforts include increasing road safety (such as marking hazards and adding lighting) and vehicle safety, both to prevent accidents (such as through routine maintenance and antilock brakes) and to mitigate the damage of crashes (such as with head restraints, air bags, seat belts, and child safety seats). Falls can be prevented by making changes to the environment, such as nonslip materials and grab bars in bathtubs and showers, railings for stairs, and child safety gates for windows. Gun-related injuries can be prevented with conflict-resolution training, gun safety education campaigns, and changes to the technology of guns, such as trigger locks, to improve their safety. Sports injuries can be prevented with changes to sports rules and equipment to increase safety, and with education campaigns to reduce risky practices such as diving into water of unknown depth or head-first tackling in association football.

Diagnosis
A person's presentation in the context of trauma or a non-traumatic background determines the suspicion for a spinal cord injury. The characteristic features are paralysis, sensory loss, or both at any level. Other symptoms may include incontinence. A radiographic evaluation using an X-ray, CT scan, or MRI can determine whether there is damage to the spinal column and where it is located.
X-rays are commonly available and can detect instability or misalignment of the spinal column, but they do not give very detailed images and can miss injuries to the spinal cord or displacement of ligaments or disks that have no accompanying spinal column damage. Thus, when X-ray findings are normal but SCI is still suspected due to pain or SCI symptoms, CT or MRI scans are used. CT gives greater detail than X-rays but exposes the patient to more radiation, and it still does not give images of the spinal cord or ligaments; MRI shows body structures in the greatest detail. Thus it is the standard for anyone who has the neurological deficits found in SCI or is thought to have an unstable spinal column injury. Neurological evaluations to help determine the degree of impairment are performed initially and repeatedly in the early stages of treatment; this determines the rate of improvement or deterioration and informs treatment and prognosis. The ASIA Impairment Scale outlined above is used to determine the level and severity of injury.

Management
The first stage in the management of a suspected spinal cord injury is geared toward basic life support and preventing further injury: maintaining airway, breathing, and circulation, and restricting further motion of the spine. In the emergency setting, most people who have been subjected to forces strong enough to cause SCI are treated as though they have instability in the spinal column and have spinal motion restricted to prevent damage to the spinal cord. Injuries or fractures in the head, neck, or pelvis, as well as penetrating trauma near the spine and falls from heights, are assumed to be associated with an unstable spinal column until this is ruled out in the hospital. High-speed vehicle crashes, sports injuries involving the head or neck, and diving injuries are other mechanisms that indicate a high SCI risk. Since head and spinal trauma frequently coexist, anyone who is unconscious or has a lowered level of consciousness as a result of a head injury has spinal motion restricted. A rigid cervical collar is applied to the neck, the head is held in place with blocks on either side, and the person is strapped to a backboard. Extrication devices are used to move people without excessively moving the spine if they are still inside a vehicle or other confined space. The use of a cervical collar has been shown to increase mortality in people with penetrating trauma and is thus not routinely recommended in this group.

Modern trauma care includes a step called clearing the cervical spine, ruling out spinal cord injury if the patient is fully conscious and not under the influence of drugs or alcohol, displays no neurological deficits, has no pain in the middle of the neck, and has no other painful injuries that could distract from neck pain. If these are all absent, no spinal motion restriction is necessary. If an unstable spinal column injury is moved, damage may occur to the spinal cord. Between 3 and 25% of SCIs occur not at the time of the initial trauma but later, during treatment or transport. While some of this is due to the nature of the injury itself, particularly in the case of multiple or massive trauma, some of it reflects the failure to adequately restrict motion of the spine. SCI can impair the body's ability to keep warm, so warming blankets may be needed. Initial care in the hospital, as in the prehospital setting, aims to ensure adequate airway, breathing, cardiovascular function, and spinal motion restriction.
Imaging of the spine to determine the presence of SCI may need to wait if emergency surgery is needed to stabilize other life-threatening injuries. Acute SCI merits treatment in an intensive care unit, especially injuries to the cervical spinal cord. People with SCI need repeated neurological assessments and treatment by neurosurgeons. People should be removed from the spine board as rapidly as possible to prevent complications from its use.

If the systolic blood pressure falls below 90 mmHg within days of the injury, blood supply to the spinal cord may be reduced, resulting in further damage. Thus it is important to maintain blood pressure, which may be done using intravenous fluids and vasopressors. Vasopressors used include phenylephrine, dopamine, and norepinephrine. Mean arterial blood pressure is measured and kept at 85 to 90 mmHg for seven days after injury. The CAMPER trial, led by Dr. Kwon, and subsequent studies by the UCSF TRACK-SCI group (Dhall) have shown that spinal cord perfusion pressure (SCPP) goals are more closely associated with better neurologic recovery than mean arterial pressure (MAP) goals. Some institutions have adopted these SCPP goals and lumbar CSF drain placement as a standard of care. The treatment for shock from blood loss is different from that for neurogenic shock and could harm people with the latter type, so it is necessary to determine why someone is in shock; however, it is also possible for both causes to exist at the same time. Another important aspect of care is prevention of insufficient oxygen in the bloodstream, which could deprive the spinal cord of oxygen. People with cervical or high thoracic injuries may experience a dangerously slowed heart rate; treatment to speed it may include atropine.

The corticosteroid medication methylprednisolone has been studied for use in spinal cord injury with the hope of limiting swelling and secondary injury. As there do not appear to be long-term benefits, and the medication is associated with risks such as gastrointestinal bleeding and infection, its use is not recommended as of 2018. Its use in traumatic brain injury is also not recommended.

Surgery may be necessary, e.g. to relieve excess pressure on the cord, to stabilize the spine, or to put vertebrae back in their proper place. In cases involving instability or compression, failing to operate can lead to worsening of the condition. Surgery is also necessary when something is pressing on the cord, such as bone fragments, blood, material from ligaments or intervertebral discs, or a lodged object from a penetrating injury. Although the ideal timing of surgery is still debated, studies have found that earlier surgical intervention (within 12 hours of injury) is associated with better outcomes. This type of surgery is often referred to as "ultra-early", a term coined by Burke et al. at UCSF. Sometimes a patient has too many other injuries to be a surgical candidate this early. Surgery remains controversial because it has potential complications (such as infection), so in cases where it is not clearly needed (as it is, for example, when the cord is being compressed), doctors must decide whether to operate based on aspects of the patient's condition and their own beliefs about its risks and benefits. Recent large-scale studies have shown that patients who undergo earlier surgery (within 12–24 hours) experience significantly lower rates of life-threatening complications and spend less time in hospital and critical care.
However, in cases where a more conservative approach is chosen, bed rest, cervical collars, motion restriction devices, and optionally traction are used. Surgeons may opt to put traction on the spine to remove pressure from the spinal cord by putting dislocated vertebrae back into alignment, but herniation of intervertebral disks may prevent this technique from relieving pressure. Gardner-Wells tongs are one tool used to exert spinal traction to reduce a fracture or dislocation and to reduce motion in the affected areas.

Spinal cord injury patients often require extended treatment in a specialized spinal unit or an intensive care unit. The rehabilitation process typically begins in the acute care setting. Usually, the inpatient phase lasts 8–12 weeks, and the outpatient rehabilitation phase lasts 3–12 months after that, followed by yearly medical and functional evaluation. Physical therapists, occupational therapists, recreational therapists, nurses, social workers, psychologists, and other health care professionals work as a team, under the coordination of a physiatrist, to decide on goals with the patient and to develop a discharge plan that is appropriate for the person's condition.

In the acute phase, physical therapists focus on the patient's respiratory status, prevention of indirect complications (such as pressure ulcers), maintaining range of motion, and keeping available musculature active. For people whose injuries are high enough to interfere with breathing, there is great emphasis on airway clearance during this stage of recovery. Weakness of the respiratory muscles impairs the ability to cough effectively, allowing secretions to accumulate within the lungs. As SCI patients have reduced total lung capacity and tidal volume, physical therapists teach them accessory breathing techniques (e.g. apical breathing, glossopharyngeal breathing) that typically are not taught to healthy individuals. Physical therapy treatment for airway clearance may include manual percussion and vibration, postural drainage, respiratory muscle training, and assisted cough techniques. Patients are taught to increase their intra-abdominal pressure by leaning forward to induce cough and clear mild secretions. The quad cough technique is performed with the patient lying on the back; the therapist applies pressure on the abdomen in the rhythm of the cough to maximize expiratory flow and mobilize secretions. Manual abdominal compression is another technique used to increase expiratory flow, which in turn improves coughing. Other techniques used to manage respiratory dysfunction include respiratory muscle pacing, use of a constricting abdominal binder, ventilator-assisted speech, and mechanical ventilation.

The amount of functional recovery and independence achieved in terms of activities of daily living, recreational activities, and employment is affected by the level and severity of injury. The Functional Independence Measure (FIM) is an assessment tool that aims to evaluate the function of patients throughout the rehabilitation process following a spinal cord injury or other serious illness or injury. It can track a patient's progress and degree of independence during rehabilitation. People with SCI may need to use specialized devices and to make modifications to their environment in order to handle activities of daily living and to function independently. Weak joints can be stabilized with devices such as ankle-foot orthoses (AFOs) or knee-ankle-foot orthoses (KAFOs), but walking may still require a lot of effort.
Increasing activity increases the chances of recovery. For paralysis at the level of the lower thoracic spine or below, beginning therapy with an orthosis is promising from the intermediate phase onward (2–26 weeks after the injury). In patients with complete paraplegia (ASIA A), this applies to lesion heights between T12 and S5. In patients with incomplete paraplegia (ASIA B–D), orthoses are suitable even for lesion heights above T12. In both cases, however, a detailed muscle function test must be carried out in order to plan the construction of the orthosis precisely.

Prognosis
Spinal cord injuries generally result in at least some incurable impairment, even with the best possible treatment. The best predictor of prognosis is the level and completeness of injury, as measured by the ASIA Impairment Scale. The neurological score at the initial evaluation, done 72 hours after injury, is the best predictor of how much function will return. Most people with ASIA scores of A (complete injuries) do not have functional motor recovery, but improvement can occur. Most patients with incomplete injuries recover at least some function. The chances of recovering the ability to walk improve with each AIS grade found at the initial examination; e.g. an ASIA D score confers a better chance of walking than a score of C. The symptoms of incomplete injuries can vary, and it is difficult to make an accurate prediction of the outcome. A person with a mild, incomplete injury at the T5 vertebra will have a much better chance of using his or her legs than a person with a severe, complete injury at exactly the same place. Of the incomplete SCI syndromes, Brown-Séquard and central cord syndromes have the best prognosis for recovery, and anterior cord syndrome has the worst.

People with nontraumatic causes of SCI have been found to be less likely to develop complete injuries and some complications, such as pressure sores and deep vein thrombosis, and to have shorter hospital stays. Their scores on functional tests were better than those of people with traumatic SCI upon hospital admission, but when they were tested upon discharge, those with traumatic SCI had improved such that both groups' results were the same. In addition to the completeness and level of the injury, age and concurrent health problems affect the extent to which a person with SCI will be able to live independently and to walk. In general, however, people with injuries at L3 or below will likely be able to walk functionally, those with injuries at T10 and below to walk around the house with bracing, and those with injuries at C7 and below to live independently. New therapies are beginning to provide hope for better outcomes in patients with SCI, but most are in the experimental/translational stage. One important predictor of motor recovery in an area is the presence of sensation there, particularly pain perception. Most motor recovery occurs in the first year post-injury, but modest improvements can continue for years; sensory recovery is more limited. Recovery is typically quickest during the first six months. Spinal shock, in which reflexes are suppressed, occurs immediately after the injury and resolves largely within three months, but continues resolving gradually for another 15 months.

Sexual dysfunction after spinal injury is common. Problems that can occur include erectile dysfunction, loss of ability to ejaculate, insufficient lubrication of the vagina, and reduced sensation and impaired ability to orgasm. Despite this, many people learn ways to adapt their sexual practices so they can lead satisfying sex lives.
Although life expectancy has improved with better care options, it is still not as good as that of the uninjured population. The higher the level of injury, and the more complete the injury, the greater the reduction in life expectancy. Mortality is especially elevated within the first year after injury.

Epidemiology
Worldwide, the reported incidence of new SCI cases since 1995 ranges from 10.4 to 83 per million people per year. This wide range is probably partly due to differences among regions in whether and how injuries are reported. In North America, about 39 people per million incur SCI traumatically each year, and in Western Europe the incidence is 16 per million. In the United States, the incidence of spinal cord injury has been estimated at about 40 cases per million people per year, or around 12,000 cases per year (see the short arithmetic sketch below for how such rates translate into case counts). In China, there are approximately 60,000 new cases per year. The estimated number of people living with SCI worldwide ranges from 236 to 4,187 per million. Estimates vary widely due to differences in how data are collected and in the techniques used to extrapolate the figures. Little information is available from Asia, and even less from Africa and South America. In Western Europe the estimated prevalence is 300 per million people, and in North America it is 853 per million. It is estimated at 440 per million in Iran, 526 per million in Iceland, and 681 per million in Australia. In the United States there are between 225,000 and 296,000 individuals living with spinal cord injuries, and different studies have estimated prevalences from 525 to 906 per million.

SCI is present in about 2% of all cases of blunt force trauma. Anyone who has undergone force sufficient to cause a thoracic spinal injury is at high risk for other injuries as well. In 44% of SCI cases, other serious injuries are sustained at the same time; 14% of SCI patients also have head trauma or facial trauma. Other commonly associated injuries include chest trauma, abdominal trauma, pelvic fractures, and long bone fractures.

Males account for four out of five traumatic spinal cord injuries. Most of these injuries occur in men under 30 years of age. The average age at the time of injury has slowly increased, from about 29 years in the 1970s to 41. In Pakistan, spinal cord injury is more common in males (92.68%) than in females, most often in the 20–30-year age group (with a median age of 40 years), although people from 12 to 70 years of age have suffered spinal cord injuries. Rates of injury are at their lowest in children, at their highest in the late teens to early twenties, and then get progressively lower in older age groups; however, rates may rise again in the elderly. In Sweden, between 50 and 70% of all cases of SCI occur in people under 30, and 25% occur in those over 50. While SCI rates are highest among people aged 15–20, fewer than 3% of SCIs occur in people under 15. Neonatal SCI occurs in one in 60,000 births, e.g. from breech births or injuries by forceps. The difference in rates between the sexes diminishes in injuries at age 3 and younger; the same number of girls are injured as boys, or possibly more. Another cause of pediatric injury is child abuse, such as shaken baby syndrome. For children, the most common cause of SCI (56%) is vehicle crashes. High numbers of adolescent injuries are attributable in large part to traffic accidents and sports injuries. For people over 65, falls are the most common cause of traumatic SCI.
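As a quick cross-check of how the per-million rates above relate to absolute counts, the snippet below reproduces the US estimate; the population figure of roughly 300 million is an assumed round number for the period of the estimate, not a number taken from the source.

```python
# Minimal arithmetic sketch: converting an incidence rate (cases per million
# people per year) into an absolute annual case count, and back.
# The US population of ~300 million is an assumed round figure.

US_POPULATION = 300_000_000        # assumed round figure
INCIDENCE_PER_MILLION = 40         # ~40 new SCI cases per million per year (from the text)

annual_cases = INCIDENCE_PER_MILLION * US_POPULATION / 1_000_000
print(f"Expected annual cases: {annual_cases:,.0f}")  # ~12,000, matching the text

# Inverse direction: what incidence does 12,000 cases per year imply?
implied_rate = 12_000 / (US_POPULATION / 1_000_000)
print(f"Implied incidence: {implied_rate:.1f} per million per year")  # 40.0
```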
The elderly and people with severe arthritis are at high risk for SCI because of defects in the spinal column. In nontraumatic SCI, the gender difference is smaller, the average age of occurrence is greater, and incomplete lesions are more common.

History
Spinal cord injury has been known to be devastating for millennia; the ancient Egyptian Edwin Smith Papyrus from 2500 BC, the first known description of the injury, says it is "not to be treated". Hindu texts dating back to 1800 BC also mention SCI and describe traction techniques used to straighten the spine. The Greek physician Hippocrates, born in the fifth century BC, described SCI in his Hippocratic Corpus and invented traction devices to straighten dislocated vertebrae. But it was not until Aulus Cornelius Celsus, born around 30 BC, noted that a cervical injury resulted in rapid death that the spinal cord itself was implicated in the condition. In the second century AD, the Greek physician Galen experimented on monkeys and reported that a horizontal cut through the spinal cord caused them to lose all sensation and motion below the level of the cut. The seventh-century Greek physician Paul of Aegina described surgical techniques for the treatment of broken vertebrae by removing bone fragments, as well as surgery to relieve pressure on the spine. Little medical progress was made during the Middle Ages in Europe; it was not until the Renaissance that the spine and nerves were accurately depicted in human anatomy drawings, by Leonardo da Vinci and Andreas Vesalius.

In 1762, the surgeon Andre Louis removed a bullet from the lumbar spine of a patient, who regained motion in the legs. In 1829, the surgeon Gilpin Smith performed a successful laminectomy that improved the patient's sensation. However, the idea that SCI was untreatable remained predominant until the early 20th century. In 1934, the mortality rate in the first two years after injury was over 80%, mostly due to infections of the urinary tract and pressure sores; the latter were believed to be intrinsic to SCI rather than a result of continuous bed rest. It was not until the second half of the century that breakthroughs in imaging, surgery, medical care, and rehabilitation medicine contributed to a substantial improvement in SCI care. The relative incidence of incomplete compared to complete injuries has improved since the mid-20th century, due mainly to the emphasis on faster and better initial care and stabilization of spinal cord injury patients. The creation of emergency medical services to professionally transport people to the hospital is given partial credit for the improvement in outcomes since the 1970s. Improvements in care have been accompanied by increased life expectancy for people with SCI; survival times have improved by about 2000% since 1940. In 2015/2016, 23% of people in nine spinal injury centres in England had their discharge delayed because of disputes about who should pay for the equipment they needed.

Research directions
Scientists are investigating various avenues for the treatment of spinal cord injury. Therapeutic research is focused on two main areas: neuroprotection and neuroregeneration. The former seeks to prevent the harm that occurs from secondary injury in the minutes to weeks following the insult, and the latter aims to reconnect the broken circuits in the spinal cord to allow function to return.
Neuroprotective drugs target secondary injury effects, including inflammation, damage by free radicals, excitotoxicity (neuronal damage by excessive glutamate signaling), and apoptosis (cell suicide). Several potentially neuroprotective agents that target pathways like these are under investigation in human clinical trials.

Stem cell research is a key avenue for SCI research, since stem cells can differentiate into other types of cells, including those lost after SCI. The goals are to replace lost spinal cord cells, to allow reconnection of broken neural circuits by regrowing axons, and to create an environment in the tissues that is favorable to growth. Types of cells being researched for use in SCI include embryonic stem cells, neural stem cells, mesenchymal stem cells, olfactory ensheathing cells, Schwann cells, activated macrophages, and induced pluripotent stem cells. Hundreds of stem cell studies have been done in humans, with promising but inconclusive results. One ongoing Phase 2 trial reported in 2016 that, after 90 days, 2 out of 4 subjects had already improved by two motor levels, thus already meeting the trial's endpoint of 2 out of 5 patients improving by two levels within 6–12 months; six-month data were expected in January 2017.

Another type of approach is tissue engineering, using biomaterials to help scaffold and rebuild damaged tissues. Biomaterials being investigated include natural substances such as collagen or agarose and synthetic ones such as polymers and nitrocellulose. They fall into two categories: hydrogels and nanofibers. These materials can also be used as vehicles for delivering gene therapy to tissues.

One avenue being explored, both to allow paralyzed people to walk and to aid in the rehabilitation of those with some walking ability, is the use of wearable powered robotic exoskeletons. These devices, which have motorized joints, are put on over the legs and supply a source of power to move and walk. Several such devices are already available for sale, but investigation is still underway into how they can be made more useful. Preliminary studies of epidural spinal cord stimulators for motor-complete injuries have demonstrated some improvement and, in some cases, have enabled walking to some degree by bypassing the injury.

In 2014, Darek Fidyka underwent pioneering spinal surgery that used nerve grafts from his ankle to bridge the gap in his severed spinal cord, and olfactory ensheathing cells (OECs) to stimulate the spinal cord cells. The surgery was performed in Poland in collaboration with Prof. Geoff Raisman, chair of neural regeneration at University College London's Institute of Neurology, and his research team. The OECs were taken from the patient's olfactory bulbs in his brain and then grown in the lab; these cells were then injected above and below the impaired spinal tissue.

There have been a number of advances in technological spinal cord injury treatment, including the use of implants that provide a "digital bridge" between the brain and the spinal cord. In a study published in May 2023 in the journal Nature, researchers in Switzerland described such implants, which allowed a 40-year-old man, paralyzed from the hips down for 12 years, to stand, walk, and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he had retained these abilities and was walking with crutches even when the implant was switched off.
In March 2025, researchers reported that a paralyzed man stood for the first time after being injected with neural stem cells to treat his spinal cord injury. The first-of-its-kind study, which is not yet peer-reviewed, is encouraging scientists to consider whether reprogrammed stem cells can be used in the future to treat people who are fully paralyzed. Reprogrammed cells are adult cells that are reverted to an embryonic-like state, from which they can be coaxed to develop into other cell types. |
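The rough level-based expectations quoted in the prognosis discussion above (C7 and below: independent living; T10 and below: household walking with bracing; L3 and below: functional walking) can be read as an ordered lookup along the cord. The sketch below, in Python, is purely illustrative: the level ordering and helper names are assumptions of this example, not a clinical instrument, and, as the article stresses, actual prognosis depends heavily on completeness, age, and comorbidities.

```python
# Illustrative sketch only: ordering spinal levels and applying the rough
# functional-outcome thresholds quoted in the article (C7, T10, L3).
# The ordering helper and threshold table are assumptions for illustration,
# not a clinical tool.

# Enumerate levels from the top of the cord downward.
LEVELS = (
    [f"C{i}" for i in range(1, 9)]
    + [f"T{i}" for i in range(1, 13)]
    + [f"L{i}" for i in range(1, 6)]
    + [f"S{i}" for i in range(1, 6)]
)
ORDINAL = {level: i for i, level in enumerate(LEVELS)}

# Thresholds from the article, checked from the most caudal (most function) up.
THRESHOLDS = [
    ("L3", "likely able to walk functionally"),
    ("T10", "walk around the house with bracing"),
    ("C7", "live independently"),
]

def rough_outcome(injury_level: str) -> str:
    """Return the article's rough functional expectation for an injury level."""
    for threshold, outcome in THRESHOLDS:
        if ORDINAL[injury_level] >= ORDINAL[threshold]:
            return outcome
    return "outcome depends heavily on severity and level"

for level in ("C4", "C7", "T11", "L4"):
    print(level, "->", rough_outcome(level))
```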
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Vertex-transitive_graph] | [TOKENS: 519] |
Vertex-transitive graph
In the mathematical field of graph theory, an automorphism is a permutation of the vertices such that edges are mapped to edges and non-edges are mapped to non-edges. A graph G is a vertex-transitive graph if, given any two vertices v1 and v2 of G, there is an automorphism f such that f(v1) = v2. In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical. Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph).

Finite examples
Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph, and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed, including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees.

Properties
The edge-connectivity of a connected vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3. If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d.

Infinite examples
Infinite vertex-transitive graphs include infinite paths (infinite in both directions), infinite regular trees, graphs of uniform tessellations, infinite Cayley graphs, and the Rado graph. Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well-known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample. |
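The definition above suggests a direct computational test: fix a base vertex v0 and check that, for every vertex v, some automorphism maps v0 to v. Below is a minimal sketch of this test in Python, assuming the third-party networkx library is available; it pins v0 and v with a node attribute so that the VF2 matcher is forced to find an automorphism carrying one to the other. The function name and the pinning trick are illustrative choices of this sketch, not a standard library routine.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher, categorical_node_match

def is_vertex_transitive(G: nx.Graph) -> bool:
    """Brute-force check via the definition: every vertex is the image of a
    fixed base vertex v0 under some automorphism of G."""
    degrees = {d for _, d in G.degree()}
    if len(degrees) > 1:
        return False  # vertex-transitive graphs are necessarily regular

    nodes = list(G.nodes)
    v0 = nodes[0]
    G1 = G.copy()
    nx.set_node_attributes(G1, {u: u == v0 for u in G1}, "pinned")
    for v in nodes:
        G2 = G.copy()
        nx.set_node_attributes(G2, {u: u == v for u in G2}, "pinned")
        # An isomorphism G1 -> G2 that respects "pinned" is exactly an
        # automorphism of G sending v0 to v.
        gm = GraphMatcher(G1, G2,
                          node_match=categorical_node_match("pinned", False))
        if not gm.is_isomorphic():
            return False
    return True

print(is_vertex_transitive(nx.petersen_graph()))  # True: vertex-transitive, non-Cayley
print(is_vertex_transitive(nx.frucht_graph()))    # False: regular but not vertex-transitive
```

Run on the examples named above, this reports the Petersen graph as vertex-transitive and the Frucht graph (regular, but with only the trivial automorphism) as not. The brute-force approach is practical only for small graphs; a census like that of Potočnik, Spiga and Verret relies on far more sophisticated group-theoretic methods.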
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Distance_education] | [TOKENS: 6902] |
Distance education
Distance education, also known as distance learning, is the education of students who may not always be physically present at school, or where the learner and the teacher are separated in both time and distance; today, it usually involves online education (also known as online learning, remote learning or remote education) through an online school. A distance learning program can be completely online, or a combination of online and traditional in-person (offline) classroom instruction (called hybrid or blended). Massive open online courses (MOOCs), offering large-scale interactive participation and open access through the World Wide Web or other network technologies, are recent educational modes in distance education. A number of other terms (distributed learning, e-learning, m-learning, virtual classroom, etc.) are used roughly synonymously with distance education. E-learning has been shown to be a useful educational tool. E-learning should be an interactive process with multiple learning modes for all learners at various levels of learning. The distance learning environment is an exciting place to learn new things, collaborate with others, and exercise self-discipline. Historically, distance education involved correspondence courses in which the student corresponded with the school by mail; with the evolution of new technologies, it has grown to include video conferencing, TV, and the Internet.

History
One of the earliest attempts at distance education was advertised in 1728. This was in the Boston Gazette for "Caleb Philipps, Teacher of the new method of Short Hand", who sought students who wanted to learn the skill through weekly mailed lessons. The first distance education course in the modern sense was provided in the 1840s by Sir Isaac Pitman, who taught a system of shorthand by mailing texts transcribed into shorthand on postcards and receiving transcriptions from his students in return for correction. The element of student feedback was a crucial innovation in Pitman's system. The postage stamp made this scheme for remote education possible, and these efforts were scalable because of the introduction of uniform postage rates across England in 1840. This early beginning proved extremely successful, and the Phonographic Correspondence Society was founded three years later to establish these courses on a more formal basis. The society paved the way for the later formation of Sir Isaac Pitman Colleges across the country. The first correspondence school in the United States was the Society to Encourage Studies at Home, founded in 1873. Founded in 1894, Wolsey Hall, Oxford was the first distance-learning college in the UK.

The University of London was the first university to offer degrees to anyone who could pass its examinations, establishing its External Programme in 1858. It had been established in 1836 as an examining and degree-awarding body for affiliated colleges, originally University College London and King's College London, with many others added over the next two decades. The affiliated colleges provided certificates that the student had attended a course. A new charter in 1858 removed this requirement, allowing men (and women from 1878) taking instruction at any institution, or pursuing a course of self-directed study, to sit the examinations and receive degrees.
The External Programme was referred to as the "People's University" by Charles Dickens, as it provided access to higher education to students from less affluent backgrounds. Enrollment increased steadily during the late 19th century, and its example was widely copied elsewhere. However, the university only provided examinations, not instructional material, leading academics to state that "the original degree by external study of the UOL was not a form of distance education". The External Programme is now known as the University of London Worldwide, and includes postgraduate and undergraduate degrees created by member institutions of the University of London. The vast distances made Australia especially active; the University of Queensland established its Department of Correspondence Studies in 1911.

William Rainey Harper, founder and first president of the University of Chicago, celebrated the concept of extended education, in which a research university had satellite colleges elsewhere in the region. In 1892, Harper encouraged correspondence courses to further promote education, an idea that was put into practice by the University of Chicago, the University of Wisconsin, Columbia University, and several dozen other universities by the 1920s. Enrollment in the International Correspondence Schools, the largest private for-profit school, based in Scranton, Pennsylvania, grew explosively in the 1890s. Founded in 1888 to provide training for immigrant coal miners aiming to become state mine inspectors or foremen, it enrolled 2,500 new students in 1894 and matriculated 72,000 new students in 1895. By 1906, total enrollments reached 900,000. The growth was due to sending out complete textbooks instead of single lessons, and to the use of 1,200 aggressive in-person salesmen. There was a stark contrast in pedagogy: "The regular technical school or college aims to educate a man broadly; our aim, on the contrary, is to educate him only along some particular line. The college demands that a student shall have certain educational qualifications to enter it and that all students study for approximately the same length of time; when they have finished their courses they are supposed to be qualified to enter any one of a number of branches in some particular profession. We, on the contrary, are aiming to make our courses fit the particular needs of the student who takes them."

Education was a high priority in the Progressive Era, as American high schools and colleges expanded greatly. For men who were older or were too busy with family responsibilities, night schools were opened, such as the YMCA school in Boston that became Northeastern University. Private correspondence schools outside of the major cities provided a flexible, focused solution. Large corporations systematized their training programs for new employees. The National Association of Corporation Schools grew from 37 member schools in 1913 to 146 in 1920. Private schools that provided specialized technical training to everyone who enrolled, not just employees of one company, began to open across the nation in the 1880s. Starting in Milwaukee in 1907, public schools began opening free vocational programs.

The International Conference for Correspondence Education held its first meeting in 1938. The goal was to provide individualized education for students, at low cost, by using a pedagogy of testing, recording, classification, and differentiation. Since then, the group has changed its name to the International Council for Open and Distance Education (ICDE), with its main office in Oslo, Norway.
The Open University (OU) in the United Kingdom was founded by the then Labour government led by Harold Wilson. Based on the vision of Michael Young, planning commenced in 1965 under the Minister of State for Education, Jennie Lee, who established a model for the Open University as one of widening access to the highest standards of scholarship in higher education, and who set up a planning committee consisting of university vice-chancellors, educationalists, and television broadcasters, chaired by Sir Peter Venables. The British Broadcasting Corporation's (BBC) Assistant Director of Engineering at the time, James Redmond, had obtained most of his qualifications at night school, and his natural enthusiasm for the project did much to overcome the technical difficulties of using television to broadcast teaching programs.

The Open University revolutionized the scope of the correspondence program and helped to create a respectable learning alternative to the traditional form of education. It has been at the forefront of developing new technologies to improve distance learning services, as well as undertaking research in other disciplines. Walter Perry was appointed the OU's first vice-chancellor in January 1969, and its foundation secretary was Anastasios Christodoulou. The election of the new Conservative government under the leadership of Edward Heath in 1970 led to budget cuts under Chancellor of the Exchequer Iain Macleod (who had earlier called the idea of an Open University "blithering nonsense"). However, the OU accepted its first 25,000 students in 1971, adopting a radical open admissions policy. At the time, the total student population of conventional universities in the United Kingdom was around 130,000.

Athabasca University, Canada's open university, was created in 1970 and followed a similar, though independently developed, pattern. The Open University inspired the creation of Spain's National University of Distance Education (1972) and Germany's University of Hagen (1974). There are now many similar institutions around the world, often with the name "Open University" (in English or in the local language), as in Italy. Most open universities use distance education technologies as delivery methods, though some require attendance at local study centers or at regional "summer schools". Some open universities have grown to become mega-universities.

The COVID-19 pandemic resulted in the closure of the vast majority of schools worldwide for in-person learning. The pandemic also exposed gaps in teachers' preparedness to use digital pedagogy effectively, including challenges with interactive instructional design and unfamiliarity with platforms such as Zoom and Teams. COVID-19 increased the value of distance education, although distance education policies had been formulated and implemented at several universities much earlier. Many schools moved to online remote learning through platforms including, but not limited to, Zoom, Blackboard, Cisco Webex, Google Meet, Microsoft Teams, Skype, D2L, GoTo Meeting and Edgenuity. A recent study showed that Google Classroom was the platform most used by students, followed by Microsoft Teams and Zoom, respectively; less-used platforms included Blackboard Learn, DingTalk, Tencent, and WhatsApp. However, although Google Classroom was the platform most used by students (a choice made by their lecturers), the platform students themselves most preferred was Microsoft Teams, followed by Google Classroom and Zoom.
Concerns arose over the impact of this transition on students without access to an internet-enabled device or a stable internet connection. Distance education during the COVID-19 pandemic interrupted synchronous learning for many students and teachers; where educators were no longer able to teach in real time and had to switch to asynchronous instruction, this significantly and negatively affected their coping with the transition, and it posed various legal issues, especially in terms of copyright. University instructors saw the physical surroundings during the COVID-19 pandemic as having a detrimental effect on the quality of distance education; however, neither the place from which the lecture was delivered nor the type of faculty showed any statistically significant variance in the quality of distance education. The shift away from real-time instruction to asynchronous learning modes posed significant challenges, impacting both the teaching and learning experience. Educators, grappling with this abrupt transition, faced hurdles in effectively engaging students and delivering course content, leading to heightened stress and burnout among faculty members. Additionally, this shift raised legal concerns, particularly regarding copyright issues related to the dissemination of educational materials in digital formats.

After the COVID-19 pandemic, some educational institutions returned to in-person classes, while others switched to blended learning or continued their online distance learning. A recent study about the benefits and drawbacks of online learning found that students have had a harder time producing their own work. The study suggests that teachers should cut back on the amount of information taught and incorporate more activities during the lesson, in order for students to create their own work. Though schools are often slow to adopt new technologies, COVID-19 required them to adapt and learn how to use new digital and online learning tools.

Web conferencing has become more popular since 2007. Researchers have found that people in online classes perform just as effectively as participants in conventional classes. Online learning is becoming a pathway for learners with limited access to physical courses to complete their degrees. Furthermore, digital classroom technologies allow those living remotely to access learning, and they enable students to fit learning into their schedules more easily.

Technologies
In synchronous learning, all participants are "present" at the same time in a virtual classroom, as in traditional classroom teaching. It requires a timetable. Web conferencing, videoconferencing, educational television, and instructional television are examples of synchronous technology, as are direct-broadcast satellite (DBS), internet radio, live streaming, telephone, and web-based VoIP. However, many learners face barriers due to a lack of stable internet connections or access to devices, highlighting a serious equity issue in digital access. Web conferencing software helps to facilitate class meetings and usually contains additional interaction tools such as text chat, polls, hand raising, and emoticons. These tools also support asynchronous participation by students, who can listen to recordings of synchronous sessions. Immersive environments (notably SecondLife) have also been used to enhance participant presence in distance education courses.
Another form of synchronous learning in the classroom is the use of robot proxies, including ones that allow sick students to attend classes. Some universities have started using robot proxies to enable more engaging synchronous hybrid classes, in which both remote and in-person students can be present and interact using telerobotics devices such as the Kubi Telepresence robot stand, which looks around, and the Double Robot, which roams around. With these telepresence robots, the remote students have a seat at the table or desk instead of being on a screen on the wall.

In asynchronous learning, participants access course materials flexibly on their own schedules. Students are not required to be together at the same time. Mail correspondence, which is the oldest form of distance education, is an asynchronous delivery technology, as are message board forums, e-mail, video and audio recordings, print materials, voicemail, and fax. The five characteristics of technological innovations (compatibility, observability, relative advantage, complexity, and trialability) have a significant positive relationship with the digital literacy of users. In addition, observability, trialability, and digital skill were found to have a significant positive influence on digital literacy.

The two methods can be combined. Many courses offered by both open universities and an increasing number of campus-based institutions use periodic sessions of residential or day teaching to supplement the sessions delivered at a distance. This type of mixed distance and campus-based education has recently come to be called "blended learning" or, less often, "hybrid learning". Many open universities use a blend of technologies and a blend of learning modalities (face-to-face, distance, and hybrid), all under the rubric of "distance learning". Distance learning can also use interactive radio instruction (IRI), interactive audio instruction (IAI), online virtual worlds, digital games, webinars, and webcasts, all of which are referred to as e-learning.

The rapid spread of film in the 1920s and radio in the 1930s led to proposals to use them for distance education. By 1938, at least 200 city school systems, 25 state boards of education, and many colleges and universities broadcast educational programs for public schools. One line of thought was to use radio as a master teacher: experts in given fields would broadcast lessons to pupils in the many schoolrooms of the public school system, asking questions, suggesting readings, making assignments, and conducting tests. This mechanizes education and leaves the local teacher only the tasks of preparing for the broadcast and keeping order in the classroom. The first large-scale implementation of radio for distance education took place in 1937 in Chicago. During a three-week school closure implemented in response to a polio outbreak that the city was experiencing, superintendent of Chicago Public Schools William Johnson and assistant superintendent Minnie Fallon implemented a program of distance learning that provided the city's elementary school students with instruction through radio broadcasts. A typical setup came in Kentucky in 1948, when John Wilkinson Taylor, president of the University of Louisville, teamed up with NBC to use radio as a medium for distance education. The chairman of the Federal Communications Commission endorsed the project and predicted that the "college-by-radio" would put "American education 25 years ahead".
The university was owned by the city, and local residents would pay the low tuition rates, receive their study materials in the mail, and listen by radio to live classroom discussions that were held on campus. The physicist Daniel Q. Posin was also a pioneer in the field of distance education, hosting a televised course through DePaul University. Charles Wedemeyer of the University of Wisconsin–Madison also promoted new methods. From 1964 to 1968, the Carnegie Foundation funded Wedemeyer's Articulated Instructional Media Project (AIM), which brought in a variety of communications technologies aimed at providing learning to an off-campus population.

The radio courses faded away in the 1950s. Many efforts to use television along the same lines proved unsuccessful, despite heavy funding by the Ford Foundation. From 1970 to 1972, the Coordinating Commission for Higher Education in California funded Project Outreach to study the potential of telecourses. The study included the University of California, California State University, and the community colleges. This study led to coordinated instructional systems legislation allowing the use of public funds for non-classroom instruction, and it paved the way for the emergence of telecourses as the precursor to the online courses and programs of today. The Coastline Community Colleges, the Dallas County Community College District, and Miami Dade Community College led the way. The Adult Learning Service of the US Public Broadcasting Service came into being, and "wrapped" series and individually produced telecourses for credit became a significant part of the history of distance education and online learning.

The widespread use of computers and the Internet has made distance learning easier and faster, and today virtual schools and virtual universities deliver full curricula online. The first online courses for graduate and undergraduate credit were offered in 1985 by Connected Education through The New School in New York City, with students earning the MA in Media Studies completely online via computer conferencing, with no in-person requirements. This was followed in 1986 by the University of Toronto, through the Graduate School of Education (then called OISE: the Ontario Institute for Studies in Education), offering a course in "Women and Computers in Education", dealing with gender issues and educational computing. The first new, fully online university was founded in 1994 as the Open University of Catalonia, headquartered in Barcelona, Spain. In 1999, Jones International University was launched as the first fully online university accredited by a regional accrediting association in the US.

Between 2000 and 2008, enrollment in distance education courses increased rapidly in almost every country, both developed and developing. Many private, public, non-profit, and for-profit institutions worldwide now offer distance education courses, from the most basic instruction through to the highest levels of degree and doctoral programs. New York University and International University Canada, for example, offer online degrees in engineering and management-related fields through NYU Tandon Online. Levels of accreditation vary: widely respected universities such as Stanford University and Harvard now deliver online courses, but other online schools receive little outside oversight, and some are fraudulent, i.e., diploma mills.
In the US, the Distance Education Accrediting Commission (DEAC) specializes in the accreditation of distance education institutions. In the United States in 2011, it was found that a third of all students enrolled in postsecondary education had taken an accredited online course in a postsecondary institution. Growth continued: in 2013, the majority of public and private colleges offered full academic programs online. Programs included training in the mental health, occupational therapy, family therapy, art therapy, physical therapy, and rehabilitation counseling fields. By 2008, online learning programs were available at the K-12 level in 44 US states.

Internet forums, online discussion groups, and online learning communities can contribute to a distance education experience. Research shows that socialization plays an important role in some forms of distance education.

Paced and self-paced models

Kaplan and Haenlein classify distance education into four groups according to time dependency and number of participants: massive open online courses (MOOCs), small private online courses (SPOCs), synchronous massive online courses (SMOCs), and synchronous small online courses (SSOCs).

Paced models are a familiar mode, since they are used almost exclusively in campus-based schools. Institutes that offer both distance and campus programs usually use paced models so that teacher workload, student semester planning, tuition deadlines, exam schedules, and other administrative details can be synchronized with campus delivery. Student familiarity and the pressure of deadlines encourage students to readily adapt to, and usually succeed in, paced models. However, student freedom is sacrificed, as a common pace is often too fast for some students and too slow for others. In addition, life events and professional or family responsibilities can interfere with a student's capability to complete tasks on an external schedule. Finally, paced models allow students to readily form communities of inquiry and to engage in collaborative work.

Self-paced courses maximize student freedom: not only can students commence studies on any date, but they can complete a course in as little as a few weeks or take a year or longer. Students often enroll in self-paced study when they are under pressure to complete programs, have not been able to complete a scheduled course, need additional courses, or face pressures that preclude regular study for any length of time. The self-paced nature of the programming, though, is an unfamiliar model for many students and can lead to excessive procrastination, resulting in course incompletion. Assessment of learning can also be challenging, as exams can be written on any day, making it possible for students to share examination questions with a resulting loss of academic integrity. Finally, it is extremely challenging to organize collaborative work activities, though some schools are developing cooperative models based upon networked and connectivist pedagogies for use in self-paced programs.

Benefits

Distance learning can expand access to education and training for both the general populace and businesses, since its flexible scheduling structure lessens the effects of the many time constraints imposed by personal responsibilities and commitments. Furthermore, the use of multimodal content such as videos, simulations, and interactive media enhances learner engagement and accommodates diverse learning styles (Veletsianos, 2020). Devolving some activities off-site alleviates institutional capacity constraints arising from the traditional demand on institutional buildings and infrastructure.
As a result, more classes can be offered, enabling students to enroll in more of their required classes on time and avoid delayed graduation. Furthermore, there is the potential for increased access to more experts in the field and to other students from diverse geographical, social, cultural, economic, and experiential backgrounds. As the population at large becomes more involved in lifelong learning beyond the normal schooling age, institutions can benefit financially, and adult learning business courses may be particularly lucrative. Distance education programs can act as a catalyst for institutional innovation and are at least as effective as face-to-face learning programs, especially if the instructor is knowledgeable and skilled.

Distance education can also provide a broader method of communication within the realm of education. With the many tools and programs that technological advancements have to offer, communication appears to increase in distance education among students and their professors, as well as among students and their classmates. This increase in communication, particularly among students and their classmates, is intended to give distance education students as many as possible of the opportunities they would receive in in-person education, and such improvements continue to grow in tandem with constant technological advancement. Present-day online communication allows students to associate with accredited schools and programs throughout the world that are out of reach for in-person learning. By having the opportunity to be involved in global institutions via distance education, students are exposed to a diverse array of thought through communication with their classmates. This is beneficial because students have the opportunity to "combine new opinions with their own, and develop a solid foundation for learning". Research has shown that "as learners become aware of the variations in interpretation and construction of meaning among a range of people [they] construct an individual meaning", which can help students become knowledgeable about a wide array of viewpoints in education. To increase the likelihood that students will build effective ties with one another during a course, instructors should use similar assignments for students across different locations, to overcome the influence of co-location on relationship building.

The high cost of education affects students in higher education, and distance education may offer some relief. Distance education has been a more cost-effective form of learning and can sometimes save students a significant amount of money compared with traditional education, for example by removing the cost of transportation. In addition, distance education may save students from the economic burden of high-priced course textbooks: many textbooks are now available as electronic textbooks (e-textbooks), which are often offered at a reduced price in comparison to traditional textbooks. Also, continuing improvements in technology have led many school libraries to partner with digital publishers that offer course materials for free, which can help students significantly with educational costs.
Within the class, students are able to learn in ways that traditional classrooms cannot provide, which can promote good learning experiences and thus allow students to obtain higher satisfaction with their online learning. For example, students can review their lessons more than once according to their needs. Students can then tailor the coursework to fit their learning by focusing more on their weaker topics while moving quickly through concepts that they already know or can easily grasp. When course design and the learning environment are at their optimal conditions, distance education can lead students to higher satisfaction with their learning experiences, and studies have shown that high satisfaction correlates with increased learning.

For those in a healthcare or mental health distance learning program, online-based interactions have the potential to foster deeper reflection and discussion of client issues, as well as quicker responses to client issues, since supervision happens on a regular basis rather than being limited to a weekly supervision meeting. This may also contribute to students feeling a greater sense of support, since they have ongoing and regular access to their instructors and other students.

Distance learning may enable students who are unable to attend a traditional school setting, due to disability or illness such as decreased mobility and immune system suppression, to get a good education. Children who are sick or unable to attend classes can attend them in "person" through the use of robot proxies. This helps the students have classroom experiences and social interaction that they are unable to receive at home or in the hospital, while still keeping them in a safe learning environment. Over the last few years,[when?] more students have been able to return safely to the classroom thanks to the help of robots. An article from the New York Times, "A Swiveling Proxy Will Even Wear a Tutu", explains the positive impact of virtual learning in the classroom, and another explains how even a simple, stationary telepresence robot can help.

Distance education may provide equal access regardless of socioeconomic status or income, area of residence, gender, race, age, or cost per student. Applying universal design strategies to distance learning courses as they are being developed (rather than instituting accommodations for specific students on an as-needed basis) can increase the accessibility of such courses to students with a range of abilities, disabilities, learning styles, and native languages. Distance education graduates, who would never have been associated with the school under a traditional system, may donate money to the school.

Distance learning offers individuals a unique opportunity to benefit from the expertise and resources of the best universities currently available. Moreover, the online environment facilitates pedagogical innovation, such as new program structures and formats. Students have the ability to collaborate, share, question, infer, and suggest new methods and techniques for continuous improvement of the content. The ability to complete a course at a pace appropriate for each individual is the most effective way to learn, given personal demands on time and schedule. Distance learning can also reduce rural exodus by enabling students from remote regions to remain in their hometowns while pursuing higher education.
Eliminating the distance barrier to higher education can also increase the number of alternatives open to students and foster greater competition between institutions of higher learning, regardless of geography.

Criticism

Barriers to effective distance education include obstacles such as domestic distractions and unreliable technology, as well as students' program costs, inadequate contact with teachers and support services, and a need for more experience. Additionally, students' lack of digital literacy and self-regulation skills has contributed to increased dropout rates, emphasizing the need for institutional training support. Some students attempt to participate in distance education without proper training in the tools needed to be successful in the program. Students must be provided with training opportunities (if needed) on each tool that is used throughout the program; a lack of advanced technology skills can lead to an unsuccessful experience, and schools have a responsibility to adopt a proactive policy for managing technology barriers. Time management skills and self-discipline in distance education are just as important as complete knowledge of the software and tools being used for learning. The results of a study of Washington state community college students showed that distance-learning students tended to drop out more often than their traditional counterparts due to difficulties in language, time management, and study skills.

According to Pankaj Singh, director of Nims University, "distance learning benefits may outweigh the disadvantages for students in such a technology-driven society; however, before indulging in the use of educational technology a few more disadvantages should be considered." He describes how, over multiple years, "all of the obstacles have been overcome and the world environment for distance education continues to improve." Singh also addresses a common criticism of distance education, that it suffers "due to a lack of direct face-to-face social interaction. However, as more people become used to personal and social interaction online (for example dating, chat rooms, shopping, or blogging), it is becoming easier for learners to both project themselves and socialize with others. This is an obstacle that has dissipated."

Not all courses required to complete a degree may be offered online. Health care profession programs in particular require some form of patient interaction through fieldwork before a student may graduate. Studies have also shown that students pursuing a medical professional graduate degree who participate in distance education courses favor face-to-face communication over professor-mediated chat rooms and independent studies. However, there is little correlation between student performance and which of these distance learning strategies is used.

There is a theoretical problem with the application of traditional teaching methods to online courses, because online courses may have no upper size limit. Daniel Barwick noted that there is no evidence that large class size is always worse or that small class size is always better, although a negative link has been established between certain types of instruction in large classes and learning outcomes; he argued that higher education has not made a sufficient effort to experiment with a variety of instructional methods to determine whether large class size is always correlated with a reduction in learning outcomes.
Early proponents of massive open online courses (MOOCs) saw them as just the type of experiment that Barwick had pointed out was lacking in higher education, although Barwick himself has never advocated for MOOCs.

There may also be institutional challenges. Distance learning is new enough that it may be a challenge to gain support for these programs in a traditional brick-and-mortar academic learning environment. Furthermore, it may be more difficult for the instructor to organize and plan a distance learning program, especially since many are new programs and their organizational needs differ from those of a traditional learning program. Additionally, though distance education offers industrial countries the opportunity to become globally informed, there are still negative sides to it. Hellman states that "These include its cost and capital intensiveness, time constraints and other pressures on instructors, the isolation of students from instructors and their peers, instructors' enormous difficulty in adequately evaluating students they never meet face-to-face, and drop-out rates far higher than in classroom-based courses."

A more complex challenge of distance education relates to cultural differences between students and teachers and among students. Distance programs tend to be more diverse, as they can go beyond the geographical borders of regions, countries, and continents, and cross cultural borders that may exist with respect to race, gender, and religion. That requires a proper understanding and awareness of the norms, differences, preconceptions, and potentially conflicting issues.

Assessments

Tools have been developed to assess the quality of distance education. Walker developed a survey instrument known as the Distance Education Learning Environment Survey (DELES), which examines instructor support, student interaction and collaboration, personal relevance, authentic learning, active learning, and student autonomy. Harnish and Reeves provide a systematic approach based on training, implementation, system usage, communication, and support.

Educational technology

The modern use of electronic educational technology (also called e-learning) facilitates distance learning and independent learning through the extensive use of information and communications technology (ICT), replacing traditional content delivery by postal correspondence. Instruction can take place through synchronous or asynchronous online communication in an interactive learning environment or virtual communities, in lieu of a physical classroom. "The focus is shifted to the education transaction in the form of a virtual community of learners sustainable across time."

One of the most significant issues encountered in the mainstream correspondence model of distance education is transactional distance, which results from the lack of appropriate communication between learner and teacher. This gap has been observed to widen when there is no communication between the learner and teacher, and it has direct implications for the learning process and future endeavors in distance education. Distance education providers therefore began to introduce various strategies, techniques, and procedures to increase the amount of interaction between learners and teachers. These measures, e.g. more frequent face-to-face tutorials and increased use of information and communication technologies (including teleconferencing and the Internet), were designed to close the gap in transactional distance.
Credentials

Online credentials for learning are digital credentials offered in place of traditional paper credentials for a skill or educational achievement. Despite their growth, the acceptability of MOOCs and online certificates varies widely among employers, and questions remain about their recognition and credibility (Kaplan & Haenlein, 2016). Directly linked to the accelerated development of internet communication technologies, the development of digital badges, electronic passports, and massive open online courses (MOOCs) has a very direct bearing on our understanding of learning, recognition, and levels, as they pose a direct challenge to the status quo. It is useful to distinguish between three forms of online credentials: test-based credentials, online badges, and online certificates.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:New_Zealand-centric] | [TOKENS: 93] |
Category:New Zealand-centric

The pages listed below have been identified as containing information specific to New Zealand without adequately covering differences found in other parts of the world. Use {{Globalize|article|New Zealand}} or {{Globalize|section|New Zealand}} in an article to place it into this category. The following 8 pages are in this category, out of 8 total.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Doing_business_as] | [TOKENS: 1588] |
Trade name

A trade name, also known as a trading name, business name or operating name, is a pseudonym used by companies and other organizations that do not operate under their registered legal name. Registering the trade name with a relevant government body is often required. In a number of countries, the phrase "trading as" (abbreviated to t/a) is used to designate a trade name. In the United States, the phrase "doing business as" (abbreviated to DBA, dba, d.b.a., or d/b/a) is used, among others such as assumed business name or fictitious business name. In Canada, "operating as" (abbreviated to o/a) and "trading as" are used, although "doing business as" is also sometimes used.

A company typically uses a trade name to conduct business under a simpler name rather than its formal and often lengthier name. Trade names are also used when a preferred name cannot be registered, often because it may already be registered or is too similar to a name that is already registered. Online platforms such as Wix, Namelix, Looka, LegalZoom, and Elementor also provide tools for creating and searching for trade names as part of establishing a business website.

Legal aspects

Using one or more fictitious business names does not create additional separate legal entities. The distinction between a registered legal name and a fictitious business name, or trade name, is important because fictitious business names do not always identify the entity that is legally responsible. Legal agreements (such as contracts) are normally made using the registered legal name of the business. If a corporation fails to consistently adhere to such important legal formalities as using its registered legal name in contracts, it may be subject to piercing of the corporate veil. In English, trade names are generally treated as proper nouns.

By country

In Argentina, a trade name is known as a nombre de fantasía ('fantasy' or 'fiction' name), and the legal name of the business is called a razón social (social name).

In Brazil, a trade name is known as a nome fantasia ('fantasy' or 'fiction' name), and the legal name of the business is called the razão social (social name).

In some Canadian jurisdictions, such as Ontario, when a businessperson writes a trade name on a contract, invoice, or cheque, they must also add the legal name of the business. Numbered companies will very often operate under a name other than their legal name, which is unrecognizable to the public.

In Chile, a trade name is known as a nombre de fantasía ('fantasy' or 'fiction' name), and the legal name of the business is called a razón social (social name).

In Ireland, businesses are legally required to register business names where these differ from the surname(s) of the sole trader or partners, or the legal name of a company. The Companies Registration Office publishes a searchable register of such business names.

In Japan, the word yagō (屋号) is used.

In Colonial Nigeria, certain tribes had members that used a variety of trading names to conduct business with the Europeans. Two examples were King Perekule VII of Bonny, who was known as Captain Pepple in trade matters, and King Jubo Jubogha of Opobo, who bore the pseudonym Captain Jaja. Both Pepple and Jaja would bequeath their trade names to their royal descendants as official surnames upon their deaths.

In Singapore, there is no filing requirement for a "trading as" name, but there are requirements for disclosure of the underlying business or company's registered name and unique entity number.
In the United Kingdom, there is no filing requirement for a business name, defined as "any name under which someone carries on business" that, for a company or limited liability partnership, "is not its registered name". There are, however, requirements for disclosure of the owner's true name and some restrictions on the use of certain names and sensitive words, as well as regulations concerning disclosure of the company name (the legal name of the company) for a company, the name of the owner for a sole trader, or the names of the partners for a partnership. The Office for Students, the higher education regulator for England, uses the term trading name in the register of higher education providers and requires these to be registered. The Charity Commission of England and Wales uses the terms working name and operating name on the register of charities, with the term working name being used in the Charities Act 2011 (as amended by the Charities Act 2022). The term operating name is also used for government agencies.

A minority of U.S. states, including Washington, still use the term trade name to refer to "doing business as" (DBA) names. In most U.S. states now, however, DBAs are officially referred to using other terms: almost half of the states, including New York and Oregon, use the terms assumed business name or assumed name, and nearly as many, including Pennsylvania, use the term fictitious name.

For consumer protection purposes, many U.S. jurisdictions require businesses operating with fictitious names to file a DBA statement, though names including the first and last name of the owner may be accepted. This also reduces the possibility of two local businesses operating under the same name, although some jurisdictions do not provide exclusivity for a name, or may allow more than one party to register the same name. Note, though, that a DBA filing is not a substitute for a trademark application and carries no legal weight in establishing trademark rights. In the U.S., trademark rights are acquired by use in commerce, but there can be substantial benefits to filing a trademark application.

Sole proprietors, individual business owners who run their businesses themselves, are the most common users of DBAs. Since most people in these circumstances use a business name other than their own name,[citation needed] it is often necessary for them to get DBAs. Generally, a DBA must be registered with a local or state government, or both, depending on the jurisdiction. For example, California, Texas, and Virginia require a DBA to be registered with each county (or independent city in the case of Virginia) where the owner does business. Maryland and Colorado have DBAs registered with a state agency. Virginia also requires corporations and LLCs to file a copy of their county or city registration with the State Corporation Commission.

DBA statements are often used in conjunction with a franchise. The franchisee will have a legal name under which it may sue and be sued, but will conduct business under the franchiser's brand name (which the public would recognize). A typical real-world example can be found in a well-known pricing-mistake case, Donovan v. RRL Corp. (2001), where the named defendant, RRL Corporation, was a Lexus car dealership doing business as "Lexus of Westminster" but remaining a separate legal entity from Lexus, a division of Toyota Motor Sales, USA, Inc.
In California, filing a DBA statement also requires that a notice of the fictitious name be published as a public legal notice in local newspapers for a set period, to inform the public of the owner's intent to operate under an assumed name. The intention of the law is to protect the public from fraud by compelling the business owner to first file or register the fictitious business name with the county clerk, and then to make a further public record of it by publishing it in a newspaper. Several other states, such as Illinois, require print notices as well.

In Uruguay, a trade name is known as a nombre fantasía, and the legal name of the business is called a razón social.[citation needed]
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-156] | [TOKENS: 10515] |
Elon Musk

Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion.

Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he held Canadian citizenship from birth, as his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002.

In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling firm the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals.

Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies.

Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published between 2025 and 2026 and became a topic of worldwide debate.

Early life

Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa; Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator, and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa.

Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023 Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid.

After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies", where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten so severely that he was hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?"

Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, he sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025).

Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. He was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months.

Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997: a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School.
He reportedly hosted large, ticketed house parties to help pay for tuition and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll, deciding instead to join the Internet boom of the 1990s. He applied for a job at Netscape but reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H-1B; according to numerous former business associates and shareholders, however, Musk said he was on a student visa at the time.

Business career

In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors, housing the venture in a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started the company with funds borrowed from Errol Musk, but in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share.

In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and over 200,000 customers joined the service in its initial months of operation. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to the resulting technological issues and the lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk, the largest shareholder with 11.72% of shares, received $175.8 million (equivalent to $320,000,000 in 2025).
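As a quick arithmetic check of the two payouts above (a reader-side verification, not a figure from the cited sources), multiplying each reported stake by the corresponding sale price reproduces the reported amounts to within rounding:

\[ 0.07 \times \$307\ \text{million} \approx \$21.5\ \text{million} \quad \text{(reported as \$22 million)} \]
\[ 0.1172 \times \$1{,}500\ \text{million} = \$175.8\ \text{million} \]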
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value.

In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from the Russian companies NPO Lavochkin and Kosmotras. He instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer.

SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, the company was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, SpaceX successfully landed the first stage of a Falcon 9 on a land platform in 2015; later landings were achieved on autonomous spaceport drone ships, ocean-based recovery platforms. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan.

In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025,[update] over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 at $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink, and in 2023 he denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response.

Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008; with sales of about 2,500 vehicles, it was the first mass-produced all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several strong-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023).

In May 2020, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over his tweet that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020 and entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so.

Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second-largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017; it operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla's directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant; two years later, the court ruled in Musk's favor.

In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions such as spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year, and in September 2023 the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials, which have caused the deaths of some monkeys, have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink.[needs update]

In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel at up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018; it used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled, but a tunnel beneath the Las Vegas Convention Center was completed in early 2021, and local officials have approved further expansions of the tunnel system.

In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter, and by the end of April he had successfully concluded his bid for approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, he fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk lessened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk also promised to step down as CEO after a Twitter poll, and five months later he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and is considered a form of shadow banning) or suspending their accounts without justification.
Other activities

In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances.

In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings such as OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI.

Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets, and the consequent fossil fuel usage, have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories about the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept.

In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content while framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian, and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies.

Politics

Musk is an outlier among business leaders, who typically avoid partisan political advocacy. He was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign and hosting DeSantis's campaign announcement in a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips.

In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 featuring executives from General Motors, Ford, and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space". Fortune remarked that this had been a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden as "not the friendliest administration". Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues.

By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying, "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers.

In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman.

Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain, and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together, then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries.

The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024 Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE, and a federal judge has since ruled that Musk acted as the de facto leader of DOGE.

Musk's role in the second Trump administration, particularly his work with DOGE, attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by that time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults.

Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump; its most notable episode came on June 5, 2025, when Musk posted on X (formerly Twitter) that "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk had gone "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the posts against Trump, saying they "went too far".

Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics they have drawn criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates until 2022, when he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, repeatedly arguing that humanity must become an interplanetary species to lower the risk of human extinction.

Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While he describes himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, has praised China's economic and climate goals, has suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was in turn accused of spreading misinformation and amplifying the far-right. He has also voiced support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms.

Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) over a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were each fined $20 million, and Musk was required to step down as Tesla chairman for three years but was able to remain CEO. Shareholders filed a lawsuit over the tweet, and in February 2023 a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation.

In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the earlier agreement's details, including a list of topics on which Musk needed preclearance. In 2020, a judge blocked a lawsuit claiming that a Musk tweet about Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter.

In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded, calling the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages.

Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, he resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). Asked about his experience growing up with Asperger's syndrome at a TED2022 conference in Vancouver, Musk said that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa.

Musk has stated that he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine, and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" Investigations by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concern from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict".

Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings; he has justified the boosting by claiming that all top accounts do it, so he must as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content, posting to X that "DEI kills art" and singling out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics.

Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk.

Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. He then dated the American actress Amber Heard for several months in 2017, having reportedly "pursued" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations because it contained characters outside the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived as impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports in September 2021 that the couple were "semi-separated", and in an interview with Time in December 2021 he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C., during Trump's second term in office.

In July 2022, The Wall Street Journal reported that Musk had allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to their divorce the following year; Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h]

On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure whether he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments was structured as a loan.

In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions; the correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day 2012, Musk emailed Epstein asking, "Do you have any parties planned?
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein emailed Musk asking whether he had plans to come to New York for the opening of the United Nations General Assembly, when many "interesting people" would be coming to his house; Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein replied, "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute."

Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. Asked in 2019 whether he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l]

Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026 according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, derived primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; by November 2020, around 75% of his wealth derived from Tesla stock, although he described himself as "cash poor". According to Forbes, he became the first person in the world to reach a net worth of $300 billion in 2021, followed by $400 billion in December 2024, $500 billion in October 2025, $600 billion in mid-December 2025, $700 billion later that month, and $800 billion in February 2026. In November 2025, a Tesla pay package for Musk worth potentially $1 trillion was approved, which he is to receive over 10 years if he meets specific goals.

Public image Although his ventures had been highly influential within their separate industries since the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, unlike other billionaires, who prefer reclusiveness to protect their businesses. Musk's actions and expressed views have made him a polarizing figure. Biographer Ashlee Vance attributed the polarized opinions of Musk to his "part philosopher, part troll" persona on Twitter. He has drawn denunciation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". His awards for contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018[m] and was elected to the National Academy of Engineering in 2022. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021, and selected him as its "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Puerto_Rico] | [TOKENS: 3178] |
Contents History of the Jews in Puerto Rico The history of the Jews in Puerto Rico dates back to the 15th century, when the anusim (variously called conversos, Crypto-Jews, secret Jews, or marranos) arrived with Christopher Columbus on his second voyage. An open Jewish community did not flourish in the colony because Judaism was prohibited by the Spanish Inquisition. Many, however, migrated to mountainous parts of the island, far from the central power of San Juan, and continued to self-identify as Jews and practice Crypto-Judaism. It would be hundreds of years before an open Jewish community was established on the island. Very few American Jews settled in Puerto Rico after it was ceded by Spain to the United States under the terms of the 1898 Treaty of Paris, which ended the Spanish–American War. The first large group of Jews to settle in Puerto Rico were refugees fleeing German-occupied Europe in the 1930s and 1940s. The second influx came in the 1950s, when thousands of Cuban Jews (most of Eastern European descent) fled after Fidel Castro came to power. The majority immigrated to Miami, Florida, but a sizable portion chose to settle and integrate themselves on the neighboring island because of Puerto Rico's cultural, linguistic, racial, and historic similarities to Cuba. Puerto Rican Jews have made many contributions in multiple fields, including business, commerce, education, and entertainment. Puerto Rico has the largest Jewish community in the Caribbean, with over 3,000 Jewish inhabitants, and is the only Caribbean island on which all three major Jewish denominations (Orthodox, Conservative, and Reform) are represented.

First Jews to arrive in Puerto Rico According to historians, the first Jews to arrive in Puerto Rico were conversos, Jews who had been forced to convert to Catholicism, serving in Christopher Columbus's crew on his second voyage to the so-called "New World"; they arrived in Puerto Rico on November 19, 1493. Historians believe that Luis de Torres, who spoke Hebrew among other languages and accompanied Columbus as his interpreter, was the first converso Jew to set foot in Puerto Rico. The Jews who arrived and settled in Puerto Rico were referred to as "Crypto-Jews" or "secret Jews". In 1478, the Catholic Monarchs of Spain, Ferdinand II of Aragon and Isabella I of Castile, established an ecclesiastical tribunal known as the Spanish Inquisition, intended to maintain Catholic orthodoxy in their kingdoms. Hundreds of Jews were killed and their synagogues destroyed, and one consequence of these disturbances was the mass forced conversion of Jews. When the Crypto-Jews arrived on the island of Puerto Rico, they hoped to avoid religious scrutiny, but the Inquisition followed the colonists. The Inquisition maintained no rota or religious court in Puerto Rico; however, heretics were written up and, if necessary, remanded to regional Inquisitional tribunals in Spain or elsewhere in the Western Hemisphere. As a result, many secret Jews settled in the island's remote, mountainous interior, far from the concentrated centers of power in San Juan, and lived quiet lives, secretly practicing Judaism while publicly professing to be Roman Catholic. Still, since Jews were not permitted to worship openly, the Crypto-Jews eventually intermarried with Catholics, and Puerto Rico therefore had virtually no open Jewish history to speak of.
19th century By the 19th century, the Spanish Crown had lost most of its possessions in the Americas. Two of its remaining possessions were Puerto Rico and Cuba, both of which were demanding more autonomy and had pro-independence movements. On August 10, 1815, the Spanish Crown issued the Royal Decree of Graces (Real Cédula de Gracias) with the intention of attracting European settlers who were not of Spanish origin to the islands. The Spanish government, believing that the independence movements would lose their popularity, granted land and initially gave settlers "Letters of Domicile". However, Europeans of the Jewish and Protestant faiths were excluded from direct acquisition of state land, since settlers were expected to swear loyalty to the Spanish Crown and allegiance to the Roman Catholic Church. The opening of new lands to Catholics resulted in some sales of existing cultivated lands to others. This, however, did not keep people of Jewish descent from settling in Puerto Rico.

Among the Jews who lived in Puerto Rico in the 19th century was Mathias Brugman (1811–1868), the son of Pierre Brugman, a Curaçaoan of Dutch-Jewish ancestry, and Isabel Duliebre of Puerto Rico. His parents met and married in New Orleans, Louisiana, where Brugman was born, raised, and educated. The Brugman family moved to Puerto Rico and settled in the city of Mayagüez, where Brugman met and married Ana Maria Laborde. He opened a colmado (grocery store) and became rather successful, only to lose a good part of his fortune attempting to grow coffee. Like many other residents of Puerto Rico at the time, he resented the political injustices practiced by Spain on the island, which led him to embrace the cause of the Puerto Rican independence movement. Brugman admired independence advocates Ramón Emeterio Betances and Segundo Ruiz Belvis. Together with his son Hector, he joined them in a conspiracy to revolt against Spain and formed a revolutionary committee code-named "Capá Prieto" (a tree known as Spanish elm, Ecuador laurel, cypre, or salmwood, whose wood was used to build ships, among other things). On September 23, 1868, Brugman and his son participated in the short-lived revolt against Spanish rule known as the "Grito de Lares" (Cry of Lares). They refused to surrender to the Spanish authorities and were eventually executed.

After the failed revolution, the Spanish Cortes passed the "Acta de Culto Condicionado" (Conditional Cult Act) in 1870, an attempt to attract more settlers faithful to the Spanish Crown by granting the right of religious freedom to all who wished to worship a religion other than Catholicism. Even so, the first synagogue was not established until after Puerto Rico was ceded by Spain to the United States at the end of the Spanish–American War in 1898. In the late 1800s, during the Spanish–American War, many Jewish American servicemen gathered with local Puerto Rican Jews at the Old Telegraph building in Ponce to hold religious services. Rabbi Adolph Spiegel was among the servicemen who stayed in Puerto Rico; he led services in Ponce from 1899 to 1905 and played an instrumental role in the establishment of the first Jewish synagogue there.

20th century Jewish-American soldiers were assigned to military bases in Puerto Rico, and many chose to stay and live on the island.
Large numbers of Jewish immigrants began to arrive in Puerto Rico in the 1930s as refugees from Nazi-occupied Europe. The majority settled in the island's capital, San Juan, where in 1942 they established the first Jewish Community Center of Puerto Rico. The president of the Puerto Rican Senate, Luis Muñoz Marín, together with Governor Rexford Tugwell, the last non-Puerto Rican governor of Puerto Rico appointed by an American president, helped advance legislation geared towards agricultural reform, economic recovery, and industrialization. This program became known as Operation Bootstrap. As a result of the program, many Jews migrated to the city of Ponce, in the southern region of the island, and worked in the agricultural industry. Operation Bootstrap also attracted clothing manufacturers from New York, and many of the people in that industry who came to the island were Jews.

In 1942, President Franklin D. Roosevelt appointed Aaron Cecil Snyder (1907–1959), born in Baltimore, Maryland, as associate justice of the Supreme Court of Puerto Rico. Snyder became the first Jew and the last non-Puerto Rican appointed to that court. In 1953, Governor Luis Muñoz Marín appointed him chief justice of the Supreme Court of Puerto Rico, the first appointment a Puerto Rican governor made to the court, addressing the nomination to "A. Cecilio Snyder"; Snyder in fact used the name "Cecilio" when sworn in as chief justice. After his departure from the court, Snyder practiced law in San Juan until his death in 1959.

In 1952, Puerto Rico achieved U.S. commonwealth status and officially became the Commonwealth of Puerto Rico (Spanish: "Estado Libre Asociado de Puerto Rico"). That same year, a handful of American Jews established the island's first synagogue in the former residence of William Korber, a wealthy Puerto Rican of German descent, which had been designed and built by Czech architect Antonin Nechodoma. The synagogue, called Sha'are Zedeck, hired its first rabbi in 1954. After the success of the Cuban Revolution, led by Fidel Castro in 1959, almost all of Cuba's 15,000 Jews went into exile. The majority fled to Miami, Florida, but Puerto Rico also received a large influx of Jewish émigrés from Cuba. Abe Fortas, an associate justice of the United States Supreme Court and the son of Orthodox Jews, was a friend of Luis Muñoz Marín and visited Puerto Rico often during the Roosevelt, Kennedy, and Lyndon B. Johnson administrations. He participated in the drafting of the Constitution of Puerto Rico and gave Luis Muñoz Marín and his administrators legal advice whenever called upon. According to Fortas's biographer Laura Kalman, "Puerto Rico engaged Fortas. It became the one cause to which he was unconditionally committed."

Establishment of a Jewish community Puerto Rico is home to the largest and wealthiest Jewish community in the Caribbean, with almost 3,000 Jewish inhabitants. Some Puerto Ricans have converted to Judaism, not only as individuals but as entire families. Puerto Rico is the only Caribbean island on which the Conservative, Reform, and Orthodox Jewish movements are all represented: Sha'are Zedeck, established in 1953, represents Conservative Judaism; Temple Beth Shalom, established in 1967, represents Reform Judaism; and the Chabad Center, established in 1997, represents Orthodox Judaism. Both the Reform and Conservative congregations use English, Spanish, and Hebrew in their teachings.
On November 30, 2005, the Puerto Rican Jewish community established its first synagogue outside the metropolitan San Juan area. The synagogue, located in the city of Mayagüez on the island's west coast, is called "Centro Hasidico Puertorriqueno Toiras Jesed". Sha'are Zedeck, which has been designated a National Historic Monument by the Puerto Rican government, and the Reform congregation are located in San Juan, while the Chabad Center is located in Isla Verde, in the city of Carolina. In the 1950s, the Puerto Rican musician Augusto Rodríguez, founder of the Choir of the University of Puerto Rico, founded the Hebrew Festival Chorus of San Juan's Jewish community.

Jewish influence in Puerto Rican and popular culture The municipality of Yauco has a street with the word "Judio" (Jewish) in its name: the "Calle Cuesta de los Judios", which in English means "Jewish Slope Street". Puerto Rican Jews have made many contributions to the Puerto Rican way of life, in fields including, but not limited to, education, commerce, and entertainment. Among the many successful businesses they have established are Supermercados Pueblo (Pueblo Supermarkets), founded by George and Harold Toppel; Almacenes Kress (a clothing store), founded by Jorge Artime; and Doral Bank, Pitusa, and Me Salve, founded by Israel Kopel.

They have also made an impact on Puerto Rico's music industry. In 1970, Raphy Leavitt organized a band with an original sound and style that became one of Puerto Rico's greatest salsa orchestras, "La Selecta". He selected the band's repertoire from songs with a particular, positive social message and philosophy, and arranged his new band's sound to be as raw and powerful as the typical all-trombone salsa sound in vogue at the time. That genre was made popular by Willie Colón, but La Selecta featured the addition of trumpets to lighten the sound melodically. Brenda K. Starr is a salsa singer who in 2002 won two Latin Grammy Awards, one for "Best Salsa Album" for "Temptation" and the other in the category "Best Salsa Single" for "Por Ese Hombre". In 2006, the Billboard Latin Music Awards nominated her for a "Best Salsa Single" award for "Tu Eres".

Puerto Rican literature has also been enriched by the works of Quiara Alegría Hudes, who wrote the book for Broadway's musical In the Heights and whose play Elliot, a Soldier's Fugue was a Pulitzer Prize finalist in 2007; the Ethiopian author of history-based fiction Yosef Alfredo Antonio Ben-Jochannan, whose two better-known works are "Black Man of the Nile and His Family" and "Africa: Mother of Major Western Religions"; author and poet Aurora Levins Morales, with her work "Remedios: Stories of Earth and Iron from the History of Puertorriqueñas"; Micol Ostow, author of "Emily Goldberg Learns to Salsa"; and Stephen Earley Jordan II, author of the short story "The Jew of Condado" (2014). In July 2003, members and friends of Temple Beth Shalom published "What's Cooking / Que se Cocina en Puerto Rico", a Spanish/English cookbook that includes Jewish recipes and covers Jewish holidays.

Among the notable people with Puerto Rican and Jewish roots are Geraldo Rivera, David Blaine, Bruno Mars, Benjamin Agosto, Hila Levy, Ian Gomez, Leslie Kritzer, Julio Kaplan, Joaquin Phoenix, and Jenna Wolfe. The American television sitcom Welcome Back, Kotter, which originally aired on the ABC network from September 9, 1975, to June 8, 1979, had a character named Juan Epstein, played by Robert Hegyes.
According to the script, Epstein was a fiercely proud Puerto Rican Jew. In the 2008 film Nothing Like the Holidays, actor John Leguizamo plays the role of Mauricio Rodriguez, a Puerto Rican whose wife Sarah (played by actress Debra Messing) is Jewish. In one scene of the film, the family discusses the fact that there are many Jewish Puerto Ricans and that San Juan has a large Jewish community.

Resolution 1480 On October 31, 2005, the Senate of Puerto Rico approved Senate Resolution 1480, recognizing the contributions the Jewish community has made to the way of life of Puerto Rico and the friendship that exists between the peoples of Puerto Rico and Israel.
======================================== |