[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_ref-7] | [TOKENS: 1793] |
Language model

A language model is a computational model that predicts sequences in natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval. Large language models (LLMs), currently their most advanced form as of 2019, are predominantly based on transformers trained on large datasets (frequently using texts scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

History

Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars. In the 1980s, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes a word's meaning such that words closer in vector space are similar in meaning, and common relationships between words, such as plurality or gender, are preserved.

Pure statistical models

In 1980, the first significant statistical language model was proposed, and during the decade IBM performed "Shannon-style" experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
A word n-gram language model is a statistical model of language that estimates the probability of the next word in a sequence from a fixed-size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens ⟨s⟩ and ⟨/s⟩ are introduced to denote the start and end of a sentence. To prevent a zero probability from being assigned to unseen words, the probability of each seen word is slightly lowered to make room for words not observed in a given corpus. Various smoothing methods achieve this, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network–based models, which in turn have been superseded by transformer-based models, often referred to as large language models.

Maximum entropy language models encode the relationship between a word and its n-gram history using feature functions. The equation is

P(w_m \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\left(a^{T} f(w_1, \ldots, w_m)\right)

where Z(w_1, \ldots, w_{m-1}) is the partition function, a is the parameter vector, and f(w_1, \ldots, w_m) is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a or some form of regularization. The log-bilinear model is another example of an exponential language model.
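The bigram counting and add-one smoothing described above can be sketched in a few lines of Python; the corpus, function names, and ⟨s⟩/⟨/s⟩ spelling here are invented for illustration:

```python
from collections import Counter

def train_bigram_model(corpus):
    """Count contexts and bigrams over sentences padded with <s> / </s> markers."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])             # context counts
        bigrams.update(zip(tokens, tokens[1:]))  # adjacent word pairs
    return unigrams, bigrams, vocab

def bigram_prob(w_prev, w, unigrams, bigrams, vocab, alpha=1.0):
    """Add-one (Laplace) smoothed P(w | w_prev): unseen pairs keep non-zero mass."""
    return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * len(vocab))

corpus = ["the cat sat", "the cat ran", "a dog sat"]
uni, bi, V = train_bigram_model(corpus)
p_seen = bigram_prob("the", "cat", uni, bi, V)    # observed bigram
p_unseen = bigram_prob("the", "dog", uni, bi, V)  # never observed, still > 0
```

Note that the smoothed probabilities over the whole vocabulary still sum to one for any fixed context, which is what distinguishes smoothing from simply adding mass.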
The skip-gram language model is an attempt at overcoming the data sparsity problem that the word n-gram language model faced. Words represented in an embedding vector need not be consecutive; they may leave gaps that are skipped over (hence the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence whose components occur at distance at most k from each other. For example, in the text "the rain in Spain", the set of 1-skip-2-grams includes all the bigrams (the rain, rain in, in Spain) and, in addition, the subsequences the in and rain Spain. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-dimensional vector representation, then

v(king) − v(male) + v(female) ≈ v(queen)

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side.

Neural models

Continuous representations or embeddings of words are produced in recurrent neural network-based language models (also known as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality: the number of possible word sequences increases exponentially with the size of the vocabulary, causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots.
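The k-skip-n-gram definition can be made concrete with a short, illustrative enumeration; the function name and sample text are our own:

```python
from itertools import combinations

def k_skip_n_grams(tokens, k, n):
    """Enumerate length-n subsequences whose successive components lie at most
    k positions apart in the original sequence (k = 0 gives ordinary n-grams)."""
    grams = []
    for idxs in combinations(range(len(tokens)), n):
        if all(idxs[i + 1] - idxs[i] <= k + 1 for i in range(n - 1)):
            grams.append(tuple(tokens[i] for i in idxs))
    return grams

grams = k_skip_n_grams("the rain in Spain".split(), k=1, n=2)
# contains all ordinary bigrams plus one-gap pairs such as ("the", "in")
```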
LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power over the syntax, semantics, and ontologies inherent in human language corpora, but they also inherit the inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs mark a significant technological shift in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities such as conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems.

LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy (the LLM's output distribution) against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety.
Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, yet fail to learn patterns that humans typically do.

Evaluation and benchmarks

Evaluation of the quality of language models is mostly done by comparison to human-created benchmarks drawn from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems.
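One common intrinsic quality measure is perplexity, the exponentiated average negative log-probability a model assigns to held-out text; lower is better. A minimal sketch, with made-up per-token probabilities standing in for a real model's outputs:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(-mean log-probability) over a held-out token sequence;
    lower values mean the model found the text less surprising."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Hypothetical per-token log-probabilities for two imaginary models.
confident = [math.log(0.5)] * 4    # each token assigned probability 0.5
uncertain = [math.log(0.05)] * 4   # each token assigned probability 0.05
# perplexity(confident) == 2.0; perplexity(uncertain) == 20.0
```

A model that assigns each held-out token probability p has perplexity 1/p, which is why perplexity is often read as an effective branching factor.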
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Procedurally_generated] | [TOKENS: 2117] |
Procedural generation

In computing, procedural generation is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated content and algorithms coupled with computer-generated randomness and processing power. In computer graphics, it is commonly used to create textures and 3D models. In video games, it is used to automatically create large amounts of content in a game. Depending on the implementation, advantages of procedural generation can include smaller file sizes, larger amounts of content, and randomness for less predictable gameplay.

Overview

The term procedural refers to the process that computes a particular function. Fractals are geometric patterns which can often be generated procedurally. Commonplace procedural content includes textures and meshes. Sound is often also procedurally generated, with applications in both speech synthesis and music; it has been used to create compositions in various genres of electronic music by artists such as Brian Eno, who popularized the term "generative music". Procedural generation was originally created as an instrument for video games, aiding in the generation of levels, textures, and complete worlds with little human contribution. Procedurally generated elements have appeared in video games since the 1990s: The Elder Scrolls II: Daggerfall takes place in a mostly procedurally generated world roughly two thirds the actual size of the British Isles. Soldier of Fortune from Raven Software uses simple routines to detail enemy models, while its sequel featured a randomly generated level mode. Avalanche Studios employed procedural generation to create a large and varied group of detailed tropical islands for Just Cause. No Man's Sky, a game developed by games studio Hello Games, is all based upon procedurally generated elements.
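The fractal route to procedural terrain can be sketched with classic midpoint displacement, shown here in a minimal 1-D form; the parameters and names are illustrative, not taken from any particular engine:

```python
import random

def midpoint_displacement(left, right, depth, roughness, rng):
    """Recursively displace segment midpoints to build a fractal height profile."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + rng.uniform(-roughness, roughness)
    first = midpoint_displacement(left, mid, depth - 1, roughness / 2, rng)
    second = midpoint_displacement(mid, right, depth - 1, roughness / 2, rng)
    return first[:-1] + second   # drop the duplicated midpoint

rng = random.Random(42)          # fixed seed: the same jagged profile every run
profile = midpoint_displacement(0.0, 0.0, depth=6, roughness=10.0, rng=rng)
# 2**6 + 1 = 65 height samples of a craggy 1-D skyline
```

Halving the roughness at each level is what produces detail at every scale, the self-similar quality the text calls fractal.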
The modern demoscene uses procedural generation to package a great deal of audiovisual content into relatively small programs. New methods and applications are presented annually in conferences such as the IEEE Conference on Computational Intelligence and Games and the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment. Particularly in video games, which are intended to be highly replayable, there are concerns that procedural systems can generate infinite numbers of worlds to explore but, without sufficient human guidance and rules shaping them, those worlds feel interchangeable. The result has been called "procedural oatmeal", a term coined by writer Kate Compton: while it is possible to mathematically generate thousands of bowls of oatmeal with procedural generation, they will be perceived to be the same by the user, lacking the perceived uniqueness that a procedural system should aim for.

In tabletop role-playing games

Using procedural generation in games had its origins in tabletop role-playing games (RPGs). The leading tabletop system, Advanced Dungeons & Dragons, provided ways for the "dungeon master" to generate dungeons and terrain using random die rolls, expanded in later editions with complex branching procedural tables. Strategic Simulations, under license from TSR, released the Dungeon Master's Assistant, a computer program that generated dungeons based on these published tables. Tunnels & Trolls, published by Flying Buffalo, was designed primarily around solitary play and used similar procedural generation for its dungeons. Other tabletop RPGs borrowed similar concepts in procedural generation for various world elements.
Many online tools for Dungeon Masters now use procedural generation to varying degrees.[citation needed]

In video games

Prior to graphically oriented video games, roguelike games, a genre directly inspired by Dungeons & Dragons adapted for solitary play, heavily utilized procedural generation to randomly produce dungeons, in the same manner that tabletop systems had done. Such early games include Beneath Apple Manor (1978) and the genre's namesake, Rogue (1980). The procedural generation system in roguelikes would create dungeons in ASCII- or regular tile-based systems and define rooms, hallways, monsters, and treasure to challenge the player. Roguelikes, and games based on roguelike concepts, allow the development of complex gameplay without having to spend excessive time creating a game's world. 1978's Maze Craze for the Atari VCS used an algorithm to generate a random, top-down maze for each game. Some games used pseudorandom number generators (PRNGs) with predefined seed values to generate very large game worlds that appeared to be premade. The Sentinel supposedly had 10,000 different levels stored in only 48 and 64 kilobytes. An extreme case was Elite, which was originally planned to contain a total of 2^48 (approximately 281 trillion) galaxies with 256 solar systems each. However, the publisher was afraid that such a gigantic universe would cause disbelief in players, and eight of these galaxies were chosen for the final version. Other notable early examples include the 1985 game Rescue on Fractalus, which used fractals to procedurally create, in real time, the craggy mountains of an alien planet, and River Raid, the 1982 Activision game that used a pseudorandom number sequence generated by a linear feedback shift register to generate a scrolling maze of obstacles.
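The seeded-PRNG trick described above, regenerating a huge world from a small seed instead of storing it, can be sketched as follows; the sector layout and parameters are invented for illustration:

```python
import random

def generate_sector(seed, width=8, height=8, density=0.3):
    """Deterministically place stars in a grid sector: the same seed always
    reproduces the same map, so a vast universe need not be stored on disk."""
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(width)]
            for _ in range(height)]

galaxy_a = generate_sector(1984)
galaxy_b = generate_sector(1984)
assert galaxy_a == galaxy_b   # regenerated from the seed, identical every time
```

This is why a game can promise astronomical numbers of levels while shipping in kilobytes: only the generator and the seeds need to exist.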
Though modern computer games do not have the same memory and hardware restrictions that earlier games had, procedural generation is frequently employed to create randomized games, maps, levels, characters, or other facets that are unique on each playthrough. Many modern roguelike games (sometimes referred to as "roguelites") have shifted away from the turn-based early roguelikes to incorporate gameplay of other video game genres, such as platformers or shoot 'em ups, while still retaining elements of procedural generation in how maps and levels are generated, ensuring that the player has a means to complete each level, along with permadeath. This also extends to power-ups and other items that populate the game's map, selected by the game through procedural generation rules so that the game feels fair to the player and they feel they have the agency to win. Procedural generation is often used in the loot systems of quest-driven games, such as action role-playing games and massively multiplayer online role-playing games. Though quests may feature fixed rewards, other loot, such as weapons and armor, may be generated for the player based on the player-character's level, the quest's level, their performance in the quest, and other random factors. This often leads to loot having a rarity quality applied to reflect when the procedural generation system has produced an item with better-than-average attributes. For example, the Borderlands series is built around a procedural generation system that can create over a million unique guns and other pieces of equipment. Many open-world or survival games procedurally create a game world from a random seed or one provided by the player, so that each playthrough is different. These generation systems create numerous pixel- or voxel-based biomes with distributions of resources, objects, and creatures. 
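Seed-driven biome generation of this kind can be sketched in a few lines. This is a hypothetical illustration, not any particular game's method: the names `biome_at` and `BIOMES`, and the choice of SHA-256, are assumptions. Each location's content is derived by hashing one global seed with the location's coordinates, so the world is identical on every machine and never needs to be stored.

```python
import hashlib

# Hypothetical biome table for the sketch.
BIOMES = ["ocean", "desert", "forest", "tundra", "mountain"]

def biome_at(world_seed: int, x: int, y: int) -> str:
    """Derive the biome at (x, y) deterministically from the world seed.
    Same seed and coordinates always yield the same biome."""
    digest = hashlib.sha256(f"{world_seed}:{x}:{y}".encode()).digest()
    # The first hash byte selects a biome.
    return BIOMES[digest[0] % len(BIOMES)]

# Two players sharing a seed see the same world at the same places...
assert biome_at(12345, 10, -3) == biome_at(12345, 10, -3)
# ...while different seeds produce different world layouts.
layout_a = [biome_at(1, x, 0) for x in range(100)]
layout_b = [biome_at(2, x, 0) for x in range(100)]
assert layout_a != layout_b
```

The same hashing trick extends naturally to resource placement or creature spawns: any per-location property becomes a pure function of the seed and the coordinates.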
The player frequently has the ability to adjust some of the generation parameters, such as specifying the amount of water coverage in a world. Examples of such games include Dwarf Fortress, Minecraft, and Vintage Story. Procedural generation is also used in space exploration and trading games. Elite: Dangerous uses the 400 billion known stars of the Milky Way Galaxy as its world basis, procedurally generating the planets in these solar systems. Similarly, Star Citizen uses the technology to create seamlessly loaded planets within its hand-crafted universe. Outerra Anteworld is a video game in development that uses procedural generation and real-world data to create a virtual replica of planet Earth at true scale. No Man's Sky uses procedural generation to present the largest universe in video game history, featuring 18 quintillion planets across entire galaxies, which can be explored in flight or on foot. The planets all have their own uniquely diverse terrain, weather, flora, and fauna, as well as a number of space-faring alien species. The same content exists at the same places for all players (thanks to a single random seed number fed to the deterministic engine), which enables players to meet and share discoveries. In other areas As in video games, procedural generation is often used in film to create visually interesting and accurate spaces rapidly. This comes in a wide variety of applications. One application is imperfect factories, which artists use to rapidly generate many similar objects. This accounts for the fact that, in real life, no two objects are ever exactly alike. For instance, an artist can model a product for a grocery store, and then create an imperfect factory to generate many imperfect copies to populate a whole shelf. MASSIVE is a high-end computer animation and artificial intelligence software package used for generating crowd-related visual effects for film and television. 
It was developed to automatically create fighting armies of hundreds of thousands of soldiers for Peter Jackson's The Lord of the Rings films. Coherent noise can be extremely important to a procedural workflow in film. "Coherent noise" here refers to a function that generates smooth pseudo-randomness in n dimensions; simplex noise is often faster and produces fewer artifacts, though the older Perlin noise function may be used as well. Poyck studied how procedurally generated cityscapes can be used to aid social simulations and to train self-driving cars. Procedural generation plays a pivotal part in the progression of digital twins, which are highly detailed virtual replicas of real-world objects used for simulation, analysis, and planning.[citation needed] Future directions Neural networks have recently been employed to refine procedurally generated content. Combining classic randomization methods with deep learning provides new ways of generating audio, images, 3D objects, and other content types. This is especially useful in game level development; reinforcement learning allows the development of agents that play generated levels, serving as automatic content evaluators. Integrating procedural generation with deep learning is altering the landscape of digital content creation. Zakaria et al. demonstrated that different deep learning methods for procedurally generating Sokoban levels have different strengths and weaknesses. Looking ahead, researchers are investigating methods to combine large language models (LLMs) with deep-learning-powered procedural generation systems, aiming to enhance their adaptability. Zakaria suggests that "LLMs combined with reinforcement learning can create procedural assets that evolve dynamically based on real-time feedback". Zakaria investigated the application of advanced deep learning structures such as bootstrapped LSTM (long short-term memory) generators and GANs (generative adversarial networks) to improve procedural level design. 
They found that "diversity sampling consistently increases the numbers of generated solutions and signatures", showing that hybrid approaches help overcome problems like repetitive patterns or lack of variation. 
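The coherent noise discussed in the film section above can be illustrated with minimal one-dimensional value noise. This is a sketch under simplifying assumptions (production tools use Perlin or simplex noise in two or more dimensions; the function names are illustrative): random values are placed at integer knots and smoothly interpolated in between, giving the gradual variation that raw white noise lacks.

```python
import math
import random

def value_noise_1d(seed: int, n_knots: int = 8):
    """Minimal 1-D value noise: random values at integer knots,
    smoothstep-interpolated in between (wraps around at n_knots).
    An illustrative sketch, not a production noise function."""
    rng = random.Random(seed)
    knots = [rng.random() for _ in range(n_knots)]

    def noise(x: float) -> float:
        i = int(math.floor(x)) % n_knots
        j = (i + 1) % n_knots
        t = x - math.floor(x)
        # Smoothstep easing removes the visible creases that plain
        # linear interpolation would leave at the knots.
        t = t * t * (3 - 2 * t)
        return knots[i] * (1 - t) + knots[j] * t

    return noise

noise = value_noise_1d(seed=7)
# Nearby inputs yield nearby outputs -- the "coherence" in coherent noise.
assert abs(noise(2.50) - noise(2.51)) < 0.1
```

Summing several copies of such a function at different frequencies ("octaves") is the usual next step for terrain-like detail.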
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-31] | [TOKENS: 5641] |
World The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. 
Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. 
There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. 
This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other. 
This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. 
This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. 
Against pantheism, it holds that there is no outright identity between the two. History of philosophy In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is due to the fact that physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world". 
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things, it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just like the play is more than the imaginary realities appearing in it so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. 
Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world version: a world contains all and only the entities that its world version describes. Religion Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a wide sense in this tradition, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools. 
Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation is possible through overcoming this illusion by acquiring knowledge of Brahman. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar). Related terms and problems A worldview is a comprehensive representation of the world and our place in it. 
As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. 
This seems to contradict the very idea of a plurality of worlds since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time, and matter all have their origin in one initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to sufficiently cool down for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized concerning their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. 
In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and animal husbandry for the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. 
A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. 
Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberals acknowledge the importance of states but also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, in this view, is more complex than a mere balance of power, since a greater variety of agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. If the international system is an anarchy of nation-states, as the realists hold, then, according to the constructivists, this is only so because we made it this way, and it may be changed, since it is not prefigured by human nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Video_game_console] | [TOKENS: 6651] |
Contents Video game console A video game console is an electronic device that outputs a video signal or image to display a video game that can typically be played with a game controller. These may be home consoles, which are generally placed in a permanent location connected to a television or other display devices and controlled with a separate game controller, or handheld consoles, which include their own display unit and controller functions built into the unit and which can be played anywhere. Hybrid consoles combine elements of both home and handheld consoles. Video game consoles are a specialized form of home computer geared towards video game playing, designed with affordability and accessibility to the general public in mind, but lacking in raw computing power and customization. Simplicity is achieved in part through the use of game cartridges or other simplified methods of distribution, easing the effort of launching a game. However, this leads to ubiquitous proprietary formats that create competition for market share. More recent consoles have shown further confluence with home computers, making it easy for developers to release games on multiple platforms. Further, modern consoles can serve as replacements for media players with capabilities to play films and music from optical media or streaming media services. Video game consoles are usually sold on a five–seven-year cycle called a generation, with consoles made with similar technical capabilities or made around the same time period grouped into one generation. The industry has developed a razor and blades model: manufacturers often sell consoles at low prices, sometimes at a loss, while primarily making a profit from the licensing fees for each game sold. Planned obsolescence then draws consumers into buying the next console generation. 
While numerous manufacturers have come and gone in the history of the console market, there have always been two or three dominant leaders in the market, with the current market led by Sony (with their PlayStation brand), Microsoft (with their Xbox brand), and Nintendo (currently producing the Switch 2 and Switch consoles). Previous console developers include Sega, Atari, Coleco, Mattel, NEC, SNK, Magnavox, Philips and Panasonic. History The first video game consoles were produced in the early 1970s. Ralph H. Baer devised the concept of playing simple, spot-based games on a television screen in 1966, which later became the basis of the Magnavox Odyssey in 1972. Inspired by the table tennis game on the Odyssey, Nolan Bushnell, Ted Dabney, and Allan Alcorn at Atari, Inc. developed the first successful arcade game, Pong, and looked to develop that into a home version, which was released in 1975. The first consoles were capable of playing only a very limited number of games built into the hardware. Programmable consoles using swappable ROM cartridges were introduced with the Fairchild Channel F in 1976, though popularized with the Atari 2600 released in 1977. Handheld consoles emerged from technology improvements in handheld electronic games as these shifted from mechanical to electronic/digital logic, and away from light-emitting diode (LED) indicators to liquid-crystal displays (LCD) that resembled video screens more closely. Early examples include the Microvision in 1979 and Game & Watch in 1980, and the concept was fully realized by the Game Boy in 1989. Both home and handheld consoles have become more advanced following global changes in technology. 
These technological shifts include improved electronic and computer chip manufacturing to increase computational power at lower costs and size, the introduction of 3D graphics and hardware-based graphic processors for real-time rendering, digital communications such as the Internet, wireless networking and Bluetooth, and larger and denser media formats as well as digital distribution. Following the same type of Moore's law progression, home consoles are grouped into generations, each lasting approximately five years. Consoles within each generation share similar specifications and features, such as processor word size. While no one grouping of consoles by generation is universally accepted, a breakdown of generations, showing representative consoles of each, is shown below. Form factor Home video game consoles are meant to be connected to a television or other type of monitor, with power supplied through an outlet. This requires the unit to be used in a fixed location, typically at home in one's living room. Separate game controllers, connected through wired or wireless connections, are used to provide input to the game. Early examples include the Atari 2600, the Nintendo Entertainment System, and the Sega Genesis; newer examples include the Wii U, the PlayStation 5, and the Xbox Series X. A microconsole is a home video game console that is typically powered by low-cost computing hardware, making the console lower-priced compared to other home consoles on the market. The majority of microconsoles, with a few exceptions such as the PlayStation TV and OnLive Game System, are Android-based digital media players that are bundled with gamepads and marketed as gaming devices. Such microconsoles can be connected to the television to play video games downloaded from an application store such as Google Play. 
Handheld game consoles are devices that typically include a built-in screen and game controller in their case, and contain a rechargeable battery or battery compartment. This allows the unit to be carried around and played anywhere, in contrast to a home game console. Examples include the Game Boy, the PlayStation Portable, and the Nintendo 3DS. Hybrid video game consoles are devices that can be used either as a handheld or as a home console. In addition to their handheld capabilities, they generally have either a wired connection or a docking station that connects the console unit to a television screen, and they can typically be played while the battery charges over a wired connection. Handhelds such as the Sega Nomad, the PlayStation Portable, and Nvidia's Shield Portable and Shield Tablet, as well as home consoles such as the Wii U, have hybrid features. The term became popular with the Nintendo Switch, a console with detachable Joy-Con controllers, which some consider the first truly hybrid console. Functionality Most consoles are considered programmable consoles and have the means for the player to switch between different games. Traditionally, this has been done by switching a physical game cartridge or game card or by using optical media. It is now common to download games through digital distribution and store them on internal or external digital storage devices. Some consoles are considered dedicated consoles, in which the games available for the console are "baked" onto the hardware, either by being programmed via the circuitry or set in the read-only flash memory of the console. Thus, the console's game library cannot be added to or changed directly by the user. The user can typically switch between games on dedicated consoles using hardware switches on the console, or through in-game menus. 
Dedicated consoles were common in the first generation of home consoles, such as the Magnavox Odyssey and the home console version of Pong, and more recently have been used for retro-style consoles such as the NES Classic Edition and Sega Genesis Mini. Dedicated consoles were very popular in the first generation until they were gradually replaced by second-generation consoles that use ROM cartridges; by the fourth generation, cartridges in turn began giving way to optical media. During the later part of video game history, there have been specialized consoles using computing components to offer multiple games to players. Most of these plug directly into one's television, and thus are often called plug-and-play consoles. Most of them are also considered dedicated consoles, since it is generally impossible for an average consumer to access the computing components, though tech-savvy consumers have often found ways to hack the console to install additional functionality, voiding the manufacturer's warranty. Plug-and-play consoles usually come with the console unit itself, one or more controllers, and the required components for power and video hookup. Many recent plug-and-play releases have been for distributing a number of retro games for a specific console platform. Examples of these include the Atari Flashback series, the NES Classic Edition, Sega Genesis Mini and also handheld retro consoles such as the Nintendo Game & Watch color screen series. Components Early console hardware was designed as customized printed circuit boards (PCBs), selecting existing integrated circuit chips that performed known functions, or programmable chips like erasable programmable read-only memory (EPROM) chips that could perform certain functions. 
Computer memory was expensive, so dedicated consoles were generally limited to the use of processor registers to store the state of a game, limiting the complexity of such titles. Pong, in both its arcade and home formats, had a handful of logic and calculation chips that used the current input of the players' paddles and registers storing the ball's position to update the game's state and send it to the display device. Even with the more advanced integrated circuits (ICs) of the time, designers were limited to what could be done through the electrical process rather than through programming as normally associated with video game development. Improvements in console hardware followed with improvements in microprocessor technology and semiconductor device fabrication. Manufacturing processes have been able to reduce the feature size on chips (typically measured in nanometers), allowing more transistors and other components to fit on a chip, and at the same time increasing the circuit speeds and the potential frequency the chip can run at, as well as reducing thermal dissipation. Chips were able to be made on larger dies, further increasing the number of features and effective processing power. Random-access memory became more practical with the higher density of transistors per chip, but to address the correct blocks of memory, processors needed to be updated to use larger word sizes and allow for larger bandwidth in chip communications. All these improvements did increase the cost of manufacturing, but at a rate far less than the gains in overall processing power, which helped to make home computers and consoles inexpensive for the consumer, all related to Moore's law of technological improvements. 
For consoles of the 1980s and 1990s, these improvements were most evident in the marketing of the "bit wars", during which console manufacturers focused on their console's processor word size as a selling point. Consoles since the 2000s are more similar to personal computers, building in memory, storage features, and networking capabilities to avoid the limitations of the past. The confluence with personal computers eased software development for both computer and console games, allowing developers to target both platforms. However, consoles differ from computers in that most of the hardware components are preselected and customized between the console manufacturer and hardware component provider to assure a consistent performance target for developers. Whereas personal computer motherboards are designed to allow consumers to add their desired selection of hardware components, the fixed set of hardware for consoles enables console manufacturers to optimize the size and design of the motherboard and hardware, often integrating key hardware components into the motherboard circuitry itself. Often, multiple components, such as the central processing unit and graphics processing unit, can be combined into a single chip, otherwise known as a system on a chip (SoC), which is a further reduction in size and cost. In addition, consoles tend to focus on components that give the unit high game performance, such as the CPU and GPU, and as a tradeoff to keep their prices in expected ranges, use less memory and storage space compared to typical personal computers. 
In comparison to the early years of the industry, when most consoles were made directly by the company selling the console, many consoles of today are generally constructed through a value chain that includes component suppliers, such as AMD and Nvidia for CPU and GPU functions, and contract manufacturers including electronics manufacturing services such as Foxconn and Flextronics, factories which assemble those components into the final consoles. Completed consoles are then usually tested, distributed, and repaired by the company itself. Microsoft and Nintendo both use this approach for their consoles, while Sony maintains all production in-house with the exception of their component suppliers. Some of the common elements that can be found within console hardware include: All game consoles require player input through a game controller to provide a method to move the player character in a specific direction and a variety of buttons to perform other in-game actions such as jumping or interacting with the game world. Though controllers have become more fully featured over the years, they still provide less control over a game compared to personal computers or mobile gaming. The type of controller available to a game can fundamentally change the style of how a console game will or can be played. However, this has also inspired changes in game design to create games that accommodate the comparatively limited controls available on consoles. Controllers have come in a variety of styles over the history of consoles. Some common types include: Numerous other controller types exist, including those that support motion controls, touchscreen support on handhelds and some consoles, and specialized controllers for specific types of games, such as racing wheels for racing games, light guns for shooting games, and musical instrument controllers for rhythm games. Some newer consoles also include optional support for mouse and keyboard devices. 
Some older consoles, such as the Sega Genesis (also known as the Mega Drive, 1988) and the 3DO Interactive Multiplayer (1993), supported optional mice, with dedicated mice made for each, though the 3DO mouse, like the console itself, was a commercial failure, and the mouse for the Sega had very limited game support. The Sega also supported the optional Menacer, a wireless infrared light gun, a category of controller that was at one point popular for games, and it also supported the BatterUP, a baseball bat-shaped controller. A controller may be attached to the console through a wired connection, in some unique cases (like the Famicom) hardwired to the console, or through a wireless connection. Controllers require power, either provided by the console via the wired connection, or from batteries or a rechargeable battery pack for wireless connections. Controllers are normally built into a handheld unit, though some newer ones allow for separate wireless controllers to also be used. While the first game consoles were dedicated game systems, with the games programmed into the console's hardware, the Fairchild Channel F introduced the ability to store games in a form separate from the console's internal circuitry, thus allowing the consumer to purchase new games to play on the system. Since the Channel F, nearly all game consoles have featured the ability to purchase and swap games in some form, though those forms have changed with improvements in technology. While magnetic storage, such as tape drives and floppy disks, had been popular for software distribution with early personal computers in the 1980s and 1990s, this format did not see much use in console systems. There were some attempts, such as the Bally Astrocade and APF-M1000 using tape drives, as well as the Disk System for the Nintendo Famicom, and the Nintendo 64DD for the Nintendo 64, but these had limited applications, as magnetic media was more fragile and volatile than game cartridges. 
In addition to built-in internal storage, newer consoles often give the consumer the ability to use external storage media to save game data, downloaded games, or other media files from the console. Early iterations of external storage were achieved through the use of flash-based memory cards, first used by the Neo Geo but popularized with the PlayStation. Nintendo continues to support this approach by extending the storage capabilities of the 3DS and Switch, standardizing on the current SD card format. As consoles began incorporating USB ports, support for USB external hard drives was also added, such as with the Xbox 360. With Internet-enabled consoles, console manufacturers offer both free and paid-subscription services that provide value-added services atop the basic functions of the console. Free services generally offer user identity services and access to a digital storefront, while paid services allow players to play online games, interact with other users through social networking, use cloud saves for supported games, and gain access to free titles on a rotating basis. Examples of such services include the Xbox network, PlayStation Network, and Nintendo Switch Online. Certain consoles saw various add-ons or accessories that were designed to attach to the existing console to extend its functionality. The best example of this was the various CD-ROM add-ons for consoles of the fourth generation, such as the TurboGrafx CD, Atari Jaguar CD, and the Sega CD. Other examples of add-ons include the 32X for the Sega Genesis, intended to allow owners of the aging console to play newer games but hampered by several technical faults, and the Game Boy Player for the GameCube, which allowed it to play Game Boy games. Consumers can often purchase a range of accessories for consoles outside of the above categories. 
These can include: Game development The core development process for a console game is very similar to that of its PC and mobile counterparts, differing primarily in the high-level concept, due to demographics, and in the technical back-end. Console manufacturers will usually make a development kit available to game developers, which they can use to test their games with more ease than on a consumer model. Early console games were commonly created by a single person and could be changed in a short amount of time due to the simplicity of the games at the time. As technology has improved, the development time, complexity and cost of console games have increased dramatically, to where the size of a team for an eighth-generation game can number in the hundreds. Similarly, the programming languages used in video game development have changed over time, with early games being developed primarily in assembly. As time went on, developers had more choice in what they could use based on availability for the console, but some languages became more popular than others. In comparison to PC and mobile games, console game developers must consider the limitations of the hardware their game is being developed for, as it is unlikely to see any major changes between the development phase and release. PC and mobile technology progresses quickly and there are many different configurations of their hardware and software. This is beneficial at the start of a console's life cycle, as the technology will be cutting edge, but as the console ages, developers are forced to work with ageing hardware until the next generation of consoles is released. Earlier console games could be developed to take advantage of the fixed limitations of the consoles they were developed for, such as the Mega Drive's capability of fast scrolling influencing design decisions made for Sonic the Hedgehog. 
Console or game development kits are specialized hardware units that typically include the same components as the console and additional chips and components to allow the unit to be connected to a computer or other monitoring device for debugging purposes. A console manufacturer will make the console's dev kit available to registered developers months ahead of the console's planned launch to give developers time to prepare their games for the new system. These initial kits will usually be offered under special confidentiality clauses to protect trade secrets of the console's design, and will be sold at a high cost to the developer as part of keeping this confidentiality. Newer consoles that share features in common with personal computers may no longer use specialized dev kits, though developers are still expected to register and purchase access to software development kits from the manufacturer. For example, any consumer Xbox One can be used for game development after paying a fee to Microsoft to register one's intent to do so. Since the release of the Nintendo Famicom / Nintendo Entertainment System, most video game console manufacturers employ strict licensing schemes that limit what games can be developed for their systems. Developers and their publishers must pay a fee, typically based on a royalty per unit sold, back to the manufacturer. The cost varies by manufacturer but was estimated to be about US$3−10 per unit in 2012. With additional fees, such as branding rights, this has generally worked out to be an industry-wide 30% royalty rate paid to the console manufacturer for every game sold. This is in addition to the cost of acquiring the dev kit to develop for the system. The licensing fee may be collected in a few different ways. In the case of Nintendo, the company generally has controlled the production of game cartridges with its lockout chips and optical media for its systems, and thus charges the developer or publisher for each copy it makes as an upfront fee. 
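The royalty economics described above can be sketched with a small calculation. This is an illustrative sketch only: the function name, the US$60 retail price, and the unit count are assumptions introduced here, while the flat 30% effective royalty rate is the industry-wide figure cited above.

```python
def revenue_split(retail_price: float, units_sold: int,
                  royalty_rate: float = 0.30) -> dict:
    """Split gross game revenue between the console manufacturer's
    royalty and the publisher, at a flat effective royalty rate.
    A simplification: real agreements layer per-unit fees, branding
    rights, and storefront cuts rather than one flat percentage."""
    gross = retail_price * units_sold
    manufacturer_royalty = gross * royalty_rate
    return {
        "gross": gross,
        "manufacturer_royalty": manufacturer_royalty,
        "publisher_share": gross - manufacturer_royalty,
    }

# Example: a US$60 game selling 100,000 units at a 30% royalty.
split = revenue_split(60.0, 100_000)
print(split["manufacturer_royalty"])  # 1800000.0
print(split["publisher_share"])       # 4200000.0
```

At these assumed numbers, the manufacturer's cut works out to US$18 per copy, comfortably above the US$3−10 per-unit estimate because it also folds in branding and other fees.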
This also allows Nintendo to review a game's content prior to release and veto games it does not believe appropriate to include on its system. This led to over 700 unlicensed games for the NES, and numerous others on other Nintendo cartridge-based systems, whose makers found ways to bypass the hardware lockout chips and sell without paying any royalties to Nintendo, such as Atari through its subsidiary Tengen. This licensing approach was similarly used by most other cartridge-based console manufacturers using lockout chip technology. With optical media, where the console manufacturer may not have direct control over the production of the media, the developer or publisher typically must establish a licensing agreement to gain access to the console's proprietary storage format for the media as well as to use the console and manufacturer's logos and branding for the game's packaging, paid back through royalties on sales. In the transition to digital distribution, where the console manufacturer now runs digital storefronts for games, license fees apply to registering a game for distribution on the storefront – again gaining access to the console's branding and logo – with the manufacturer taking its cut of each sale as its royalty. In both cases, this still gives console manufacturers the ability to review and reject games they believe unsuitable for the system and deny licensing rights. With the rise of indie game development, the major console manufacturers have all developed entry-level routes for these smaller developers to publish onto consoles at far lower costs and reduced royalty rates. Programs like Microsoft's ID@Xbox give developers most of the needed tools for free after validating the small development size and needs of the team. Similar licensing concepts apply to third-party accessory manufacturers. Emulation and backward compatibility Consoles, like most consumer electronic devices, have limited lifespans. 
There is great interest in the preservation of older console hardware for archival and historical purposes, as games from older consoles, as well as arcade and personal computers, remain of interest. Computer programmers and hackers have developed emulators that can be run on personal computers or other consoles and that simulate the hardware of older consoles, allowing games from those consoles to be run. The development of software emulators of console hardware has been established as legal, but there are unanswered legal questions surrounding copyrights, including acquiring a console's firmware and copies of a game's ROM image, which laws such as the United States' Digital Millennium Copyright Act make illegal save for certain archival purposes. Even though emulation itself is legal, Nintendo is recognized as highly protective of any attempts to emulate its systems and has taken early legal action to shut down such projects. To help support older games and console transitions, manufacturers started to support backward compatibility on consoles in the same family. Sony was the first to do this on a home console with the PlayStation 2, which was able to play original PlayStation content, and backward compatibility subsequently became a sought-after feature across many consoles that followed. Backward compatibility functionality has included direct support for previous console games on the newer consoles, such as within the Xbox console family, the distribution of emulated games, such as Nintendo's Virtual Console, or the use of cloud gaming services for these older games, as with the PlayStation Now service. Market Consoles may be shipped in a variety of configurations, but typically will include one base configuration that includes the console, one controller, and sometimes a pack-in game. Manufacturers may offer alternate stock keeping unit (SKU) options that include additional controllers and accessories or different pack-in games. 
Special console editions may feature unique cases or faceplates with art dedicated to a specific video game or series and are bundled with that game as a special incentive for its fans. Pack-in games are typically first-party games, often featuring the console's primary mascot characters. The more recent console generations have also seen multiple versions of the same base console system either offered at launch or presented as a mid-generation refresh. In some cases, these simply replace some parts of the hardware with cheaper or more efficient parts, or otherwise streamline the console's design for production going forward; the PlayStation 3 underwent several such hardware refreshes during its lifetime due to technological improvements such as significant reductions in the process node size for the CPU and GPU. In these cases, the hardware revision model will be marked on the packaging so that consumers can verify which version they are acquiring. In other cases, the hardware changes create multiple lines within the same console family. The base console unit in all revisions shares fundamental hardware, but options like internal storage space and RAM size may differ. Those systems with more storage and RAM are marked as a higher-performance variant available at a higher cost, while the original unit remains as a budget option. For example, within the Xbox One family, Microsoft released the mid-generation Xbox One X as a higher-performance console, the Xbox One S as the lower-cost base console, and a special Xbox One S All-Digital Edition revision that removed the optical drive on the basis that users could download all games digitally, offered at an even lower cost than the Xbox One S. In these cases, developers can often optimize games to work better on the higher-performance console with patches to the retail version of the game. 
In the case of the Nintendo 3DS, the New Nintendo 3DS featured upgraded memory and processors, with some new games that could be run only on the upgraded units and not on an older base unit. There have also been a number of "slimmed-down" console options with reduced hardware components that significantly lowered the price at which the console could be sold to the consumer, but either leaving certain features off the console, such as the Wii Mini, which lacked the online components of the Wii, or requiring the consumer to purchase additional accessories and wiring if they did not already own them, such as the New-Style NES, which was not bundled with the RF hardware required to connect to a television. When originally launched in the 1970s and 1980s, consoles cost about US$200−300, and with the introduction of the ROM cartridge, each game averaged about US$30−40. Over time the launch price of base console units has generally risen to about US$400−500, with the average game costing US$60. Exceptionally, the period of transition from ROM cartridges to optical media in the early 1990s saw several consoles with high price points exceeding US$400 and going as high as US$700. As a result, sales of these first optical media consoles were generally poor. When adjusted for inflation, the price of consoles has generally followed a downward trend, from US$800−1,000 in the early generations down to US$500−600 for current consoles. This is typical for any computer technology, with the improvements in computing performance and capabilities outpacing the additional costs to achieve those gains. Further, within the United States, the price of consoles has generally remained consistent, being within 0.8% to 1% of the median household income, based on the United States Census data for the console's launch year. 
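The inflation adjustment and income comparison behind these figures can be sketched as follows. This is a minimal sketch: `adjust_for_inflation` and `share_of_income` are hypothetical helper names, and the CPI and income values in the example are made-up placeholders, not official statistics.

```python
def adjust_for_inflation(price: float, cpi_then: float, cpi_now: float) -> float:
    """Restate a historical price in today's dollars using the ratio of
    a price index (e.g. CPI) between the two years."""
    return price * (cpi_now / cpi_then)

def share_of_income(price: float, median_household_income: float) -> float:
    """Console launch price as a fraction of median household income,
    the comparison described in the text above."""
    return price / median_household_income

# Example with placeholder CPI values: a US$200 console launched when
# the index stood at 80, restated at a present-day index of 300.
print(adjust_for_inflation(200.0, 80.0, 300.0))  # 750.0

# A US$500 console against a placeholder US$70,000 median income
# comes to roughly 0.7% of that income.
print(round(share_of_income(500.0, 70_000.0) * 100, 2))  # 0.71
```

The same ratio arithmetic underlies the claim that early US$200−300 consoles correspond to roughly US$800−1,000 in current dollars.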
Since the Nintendo Entertainment System, console pricing has stabilized on the razor and blades model, where the consoles are sold at little to no profit for the manufacturer, which instead gains revenue from each game sold through console licensing fees and other value-added services around the console (such as Xbox Live). Console manufacturers have even been known to take losses on the sale of consoles at launch, with the expectation of recovering through revenue sharing and later price recovery on the console as they switch to less expensive components and manufacturing processes without changing the retail price. Consoles have generally been designed to have a five-year product lifetime, though manufacturers have considered their entries in the more recent generations to have longer lifetimes of seven to potentially ten years. The competition within the video game console market, as a subset of the video game industry, is an area of interest to economics given its relatively modern history, its rapid growth to rival that of the film industry, and its frequent changes compared to other sectors. The effects of unregulated competition on the market were twice seen early in the industry. The industry had its first crash in 1977: following the release of the Magnavox Odyssey, Atari's home versions of Pong and the Coleco Telstar, other third-party manufacturers, using inexpensive General Instrument processor chips, made their own home consoles, which flooded the market by 1977. The video game crash of 1983 was fueled by multiple factors, including competition from lower-cost personal computers, but unregulated competition was also a factor, as numerous third-party game developers, attempting to follow the success of Activision in developing third-party games for the Atari 2600 and Intellivision, flooded the market with poor-quality games, making it difficult for even quality games to sell. 
Nintendo implemented a lockout chip, the Checking Integrated Circuit, when releasing the Nintendo Entertainment System in Western territories as a means to control which games were published for the console. As part of its licensing agreements, Nintendo further prevented developers from releasing the same game on a different console for a period of two years. This served as one of the first means of securing console exclusivity for games that went beyond the technical limitations of console development. The Nintendo Entertainment System also introduced the concept of a video game mascot representing a console system as a means to sell and promote the unit; for the NES, this was Mario. The use of mascots in business had been a tradition in Japan and had already proven successful in arcade games such as Pac-Man. Mario served as an identity for the NES as a humorous, playful console. Mario caught on quickly when the NES released in the West, and when the next generation of consoles arrived, other manufacturers pushed their own mascots to the forefront of their marketing, most notably Sega with Sonic the Hedgehog. The Nintendo and Sega rivalry built around their mascots' flagship games formed part of the fourth console generation's "console wars". Since then, manufacturers have typically positioned their mascot and other first-party games as key titles in console bundles used to drive sales at launch or during key sales periods such as the run-up to Christmas. Another competitive edge used by console manufacturers around the same time was the notion of "bits", the word size of the main CPU. The TurboGrafx-16 was the first console to promote its bit size, advertising itself as a "16-bit" console, though this referred only to part of its architecture, while its CPU was still an 8-bit unit.
Despite this, manufacturers found that consumers became fixated on bits as a selling point, and over the fourth, fifth, and sixth generations these "bit wars" played heavily into console advertising. The use of bits waned as CPU architectures no longer needed to increase their word size and instead found other means to improve performance, such as multicore CPUs. Generally, a larger number of consoles gives rise to more consumer options and better competition, but the exclusivity of titles made the choice of console an "all-or-nothing" decision for most consumers. Further, as the number of available consoles grew with the fifth and sixth generations, game developers were pressured to choose which systems to focus on, and ultimately narrowed their target platforms to the best-selling ones. This caused a contraction in the market, with major players like Sega leaving the hardware business after the Dreamcast while continuing in software. Effectively, each console generation has had two or three dominant players. Competition in the console market in the 2010s and 2020s is considered an oligopoly between three main manufacturers: Nintendo, Sony, and Microsoft. The three use a combination of first-party games exclusive to their consoles and negotiated agreements with third-party developers to keep games exclusive for at least an initial period, driving consumers to their platforms. They have also worked with CPU and GPU manufacturers to tune and customize computer hardware to make it more effective for video games, lowering the cost of the hardware needed for video game consoles. Finally, console manufacturers work with retailers to help promote consoles, games, and accessories.
While there is little margin between the manufacturer's suggested retail price of console hardware and what the retailer pays, these arrangements with manufacturers can secure retailers better profits on game and accessory bundles in exchange for premier product placement. These all form network effects, with each manufacturer seeking to maximize the size of its network of partners to improve its overall competitive position. Of the three, Microsoft and Sony, both with their own hardware manufacturing capabilities, pursue a leading-edge approach, each attempting to gain a first-mover advantage over the other in adopting new console technology. Nintendo, more reliant on its suppliers, does not try to compete feature for feature with Microsoft and Sony, and has instead taken a "blue ocean" strategy since the Nintendo DS and Wii.
======================================== |
[SOURCE: https://www.bbc.com/news/articles/czd831elpz5o] | [TOKENS: 1693] |
Call of Duty advert banned for trivialising sexual violence

Laura Cress, Technology reporter

The advert for Call of Duty: Black Ops 7 ran in November 2025

An advert for a Call of Duty game has been banned by the UK's advertising regulator for trivialising sexual violence. The commercial for Call of Duty: Black Ops 7 featured fake officers at an airport security check - as the real ones were too busy playing the game. Viewers complained the video, which included a man being told to strip down while an officer put on gloves and said "time for the puppet show", was "irresponsible and offensive". Gaming company Activision Blizzard UK Ltd said the ad promoted the 18-rated video game and was therefore targeted at adult audiences only, who had a higher tolerance for irreverent or exaggerated humour.

The spot ran on YouTube and video-on-demand services, including ITV and Channel 5, in November 2025. It was one of several used to promote the latest game in the Call of Duty series. The campaign featured the idea that replacements had to step into different job roles because the original staff were playing Call of Duty: Black Ops 7 instead. The ad in question featured an airport security setting, with one actor explaining they were the "replacers". A man was then told he had been randomly selected "to be manhandled" before being told to remove his clothes down to "everything but the shoes", while the female officer put on a pair of gloves.

The Advertising Standards Authority (ASA) received complaints from nine viewers who believed the ad trivialised sexual violence.

Comedian Nikki Glazer played one of the "replacer" characters in the Call of Duty advert

Activision Blizzard UK Ltd said the ad had been reviewed by Clearcast, which provides pre-clearance of TV advertising, and had been approved with an "ex-kids" timing restriction. It added it was not broadcast during or around children's programming or content likely to appeal to under-16s. The company claimed it depicted a deliberately implausible, parodic scenario that bore no resemblance to real airport security procedures. According to the firm, the ad did not sexualise the act of performing searches, and the humour referred to discomfort rather than sex. It added that even if some viewers inferred innuendo, it did not contain explicit content or objectifying imagery.

'Irresponsible and offensive'

The ASA said the story included a non-consensual, invasive search of a man passing through airport security. However, it acknowledged the video did not include explicit imagery and that the man remained clothed for its duration. But the watchdog noted the humour was "generated by the humiliation and implied threat of painful, non-consensual penetration of the man". The ASA concluded that the advert trivialised sexual violence and was therefore irresponsible and offensive. It therefore ruled the ad must not appear again in its current form.

Two further complainants also questioned whether the ad encouraged or condoned drug use, due to a scene in which the replacement officers picked up a prescription medication container and winked. This complaint was not upheld by the ASA.

It is not the first time an advert for the video game series has been banned. In 2012, an advert for Call of Duty: Modern Warfare 3 showing armed men firing at a lorry was given a daytime ban by the ASA for scenes of violence and destruction that were "inappropriate" for young children.
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cge8nd5ve00o] | [TOKENS: 1690] |
Indian university faces backlash for claiming Chinese robodog as own at AI summit

Cherylann Mollan, Mumbai

Online users identified the machine as the Go2 model made by Chinese firm Unitree Robotics

An Indian university has courted controversy at the AI summit in Delhi after an official claimed that a Chinese-made robotic dog was its own invention. The incident came to light after a professor from Galgotias University told state-run broadcaster DD News that the robot, named "Orion", was "developed" at their Centre of Excellence. A video of her remarks went viral. Online users later identified the machine as the Go2 model made by Chinese firm Unitree Robotics, which is commercially available starting at about 200,000 rupees ($2,200; £1,600).

In a statement on Wednesday, the university denied claiming it had built the robot and described the backlash as a "propaganda campaign". "We would like to clearly state that the robotic programming is part of our endeavour to make students learn AI programming and develop and deploy real world skills using globally available tools and resources, given developing AI talent is need of the hour," the university said.

Neha Singh, the professor seen in the video, later told reporters her remarks had been misunderstood. "It might be that I could not convey well what I wanted to say, or you could not understand well what I wanted to say," she said. Social media users, however, accused the university of dishonesty. Reports said that following the backlash, the university was asked to vacate its stall at the summit. Faculty members said they had received no official communication to do so. But hours later, news agency Press Trust of India reported that electricity supply to the stall was cut off following the controversy. A BBC reporter at the summit said the lights were turned off at the booth and no staff from the university were around.

The incident is being seen as an embarrassment for the organisers of the summit, as the video had also been shared on IT Minister Ashwini Vaishnaw's official X account. The post has since been deleted. India's IT Secretary S Krishnan said the controversy should not "overshadow" the work put in by other participants at the summit. "What happened should not affect the way people present or exhibit their work at such events. The idea is not to use an opportunity like this to become something else or create unnecessary noise. It is essential that a proper code of conduct is followed. There are other countries and other participants involved as well," he told reporters.

The India AI Impact Summit, inaugurated by Prime Minister Narendra Modi at Bharat Mandapam on Monday, is being pitched by the government as a flagship gathering to position India as a global AI hub. Delegates from more than 100 countries, including several heads of government, are attending, alongside industry leaders such as Sundar Pichai of Google. The five-day summit features policy discussions, startup showcases, and closed-door meetings on AI governance, infrastructure, and innovation. However, its opening day was overshadowed by complaints of overcrowding, long queues, and confusion at the venue, prompting organisers to extend exhibition hours and tighten entry management. They say arrangements have since improved. BBC correspondent Vikas Pandey, who is at the summit, said the venue was "absolutely buzzing" on the third day, with thousands of people from different parts of India visiting stalls and soaking up the excitement.
Officials say they hope the event and the conversations around it will help adoption of AI across the country.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_ref-Word_n-gram_language_model_jm_13-0] | [TOKENS: 1793] |
The skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. the word n-gram language model) faced: the words in a modeled context need no longer be consecutive, but may leave gaps that are skipped over (hence the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence whose components occur at distance at most k from each other. For example, the 1-skip-2-grams of a text include all of its ordinary bigrams (2-grams) and, in addition, all pairs of words separated by exactly one intervening word. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-dimensional vector representation, then

v(king) − v(male) + v(female) ≈ v(queen)

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side.

Neural models

Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots.
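The k-skip-n-gram definition above can be made concrete with a short sketch that enumerates the skip-grams of a token sequence (the function name is ours, not a standard API):

```python
from itertools import combinations

def skip_grams(tokens, n, k):
    """All k-skip-n-grams of `tokens`: length-n subsequences whose
    consecutive chosen positions are at most k+1 apart, i.e. at most
    k words are skipped between components."""
    grams = set()
    for idx in combinations(range(len(tokens)), n):
        if all(j - i <= k + 1 for i, j in zip(idx, idx[1:])):
            grams.add(tuple(tokens[i] for i in idx))
    return grams

# With k=0 this yields exactly the ordinary bigrams; k=1 adds
# the pairs separated by one intervening word.
print(skip_grams("a b c d".split(), n=2, k=1))
```

For "a b c d" this produces the bigrams (a,b), (b,c), (c,d) plus the one-word-skip pairs (a,c) and (b,d), matching the definition in the text.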
LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. 
Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do. Evaluation and benchmarks Evaluation of the quality of language models is mostly done by comparison to human-created benchmarks drawn from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems.
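One standard intrinsic quality measure of the kind mentioned above is perplexity: the exponential of the average negative log-probability the model assigns to each token of a held-out text. A minimal sketch (the helper name is ours):

```python
import math

def perplexity(probs):
    """Perplexity of a model over a test sequence, given the
    probability the model assigned to each observed token."""
    assert all(0 < p <= 1 for p in probs)
    avg_nll = -sum(math.log(p) for p in probs) / len(probs)
    return math.exp(avg_nll)

# A model that spreads probability uniformly over 4 choices has
# perplexity 4: it is "as confused" as a fair 4-way guess.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Lower perplexity means the model found the held-out text less surprising; a model that assigned probability 1 to every observed token would reach the minimum of 1.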
======================================== |
[SOURCE: https://www.fast.ai/posts/2021-09-02-covid-r.html] | [TOKENS: 2528] |
Australia can, and must, get R under 1.0 Jeremy Howard September 3, 2021 On this page Summary: By using better masks, monitoring and improving indoor air quality, and rolling out rapid tests, we could quickly halt the current outbreaks in the Australian states of New South Wales (NSW) and Victoria. If we fail to do so, and open up before 80% of all Australians are vaccinated, we may have tens of thousands of deaths, and hundreds of thousands of children with chronic illness which could last for years. We can get R under 1.0 Pandemics either grow exponentially, or disappear exponentially. They don’t just stay at some constant level. If the reproduction number R, which is how many people each infected person transmits to, is greater than 1.0 in a region, then the pandemic grows exponentially and becomes out of control (as we see in NSW now); if it is less than 1.0, the virus dies out. No Australian state or territory is currently using any of the three best “bang for your buck” public health interventions: better masks, better ventilation, or rapid testing. Any of these on their own (combined with the existing measures being used in Vic) would likely be enough to get R<1. The combination of them would probably kill off the outbreaks rapidly. At that point life can largely return to normal. Stopping delta is not impossible. Other jurisdictions have done it, including Taiwan and China. New Zealand appears to be well on the way too. There’s no reason Australia can’t join them. Scientists have found that using better masks is the single best way to decrease viral transmission in a close indoor setting. They showed that if all teachers and students wear masks with good fit and filtration, transmission is reduced by a factor of around 300.
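The exponential behaviour described above follows directly from the definition of R. A minimal sketch, using illustrative numbers only:

```python
def cases_after(initial_cases: float, r: float, generations: int) -> float:
    """Expected new cases after a number of transmission generations,
    with each infection causing r further infections on average."""
    return initial_cases * r ** generations

# R above 1.0 compounds upward; R below 1.0 dies away.
print(cases_after(100, 1.3, 10))  # grows: ~1379 cases
print(cases_after(100, 0.9, 10))  # shrinks: ~35 cases
```

The same starting point of 100 cases ends an order of magnitude apart after ten generations, which is why small changes in R around 1.0 matter so much.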
The CDC has found that two free and simple techniques to enhance the fit of surgical masks, “double masking” and “knot and tuck”, both decrease virus exposure by a factor of more than ten compared to wearing a cloth or surgical mask alone. For more information, see my article (with Zeynep Tufekci) in The Atlantic. We now know that covid is airborne. That means that we need clean air. A recent study has shown that the key to managing this is to monitor CO2 levels in indoor spaces. That’s because CO2 levels are a good proxy for how well air is being circulated. Without proper ventilation, CO2 levels go up, and if there are infected people around, virus levels go up too. CO2 monitors can be bought in bulk for around $50. Standards should be communicated for what acceptable maximum levels of CO2 are for classrooms, workplaces, and public indoor spaces, and education provided on how to improve air quality. Where CO2 levels cannot be controlled, air purifiers with HEPA filtration should be required. Better ventilation can decrease the probability of infection by a factor of 5-10 compared to indoor spaces which do not have good airflow. Rapid antigen lateral flow tests are cheap, and provide testing results within 15-30 minutes. They have very few false positives. A Brisbane-based company, Ellume, has an FDA-approved rapid test, and is exporting it around the world. But we’re not using it here in Australia. If every workplace and school required daily rapid tests, around 75% of cases in these locations would be identified. Positive cases would isolate until they have results from a follow-up PCR test. Using this approach, transmission in schools and workplaces would be slashed by nearly three quarters, bringing R well under 1.0. In the UK every child was tested twice a week in the last school term. Recent research suggests that daily rapid tests could allow more students to stay at school.
The Grattan Institute found we need to vaccinate at least 80% of the total population (including children) this year, and continue the vaccination rollout to 90% throughout 2022. Clinical trials for the vaccine in kids are finishing this month. If we can quickly ramp up the roll-out to kids, and maintain the existing momentum of vaccinations in adults, we may be able to achieve the 80% goal by the end of the year. It’s important to understand, however, that no single intervention (including vaccination) will control covid. Many countries with high vaccination rates today have high covid death rates, due to waning immunity and unvaccinated groups. The point of all of these interventions is to reduce R. When R is under 1 and cases are under control, restrictions are not needed; otherwise, they are needed. We must get R under 1.0 The Doherty Report predicts that over three hundred thousand children will get symptomatic covid, and over 1.4 million kids will be infected, in the next 6 months if restrictions are reduced when 70% of adults are vaccinated. This may be a significant under-estimate: a recent CDC study predicts that 75% of school-kids would get infected in three months in the absence of vaccines and masks. New research has found that one in seven infected kids may go on to develop “long covid”, a debilitating illness which can impact patients for years. Based on this data, we are looking at two hundred thousand kids (or possibly far more) with chronic illness. The reality may be even worse than this, since that research uses PCR tests to find infected kids, but PCR testing strategies have been shown to fail to identify covid in kids about half the time. Furthermore, this study looked at the alpha variant. The delta variant appears to be about twice as severe. It’s too early to say when, or if, these children will recover. Some viruses such as polio led to life-long conditions, which weren’t discovered until years later. 
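The two-hundred-thousand figure above follows directly from the two cited numbers; as a quick arithmetic check (variable names are illustrative):

```python
infected_kids = 1_400_000  # Doherty Report projection over the next 6 months
long_covid_rate = 1 / 7    # "one in seven infected kids" developing long covid

chronic = round(infected_kids * long_covid_rate)
print(chronic)  # → 200000
```

As the article notes, this is likely conservative, since PCR testing misses roughly half of infections in children and the underlying study predates the delta variant.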
Long covid has a lot of similarities to myalgic encephalomyelitis, which for many people is a completely debilitating life-long condition. In regions which have opened up, such as Florida, schools were “drowning” in cases within one week of starting term. In the UK, lawsuits are now being filed based on the risks being placed on children. Delta rips through unvaccinated populations. For instance, in England delta took hold during May 2021. English schools took a cautious approach, placing school children in “bubbles” which did not mix. After school, children were required to go directly home and not mix with anyone else. Nonetheless, within three months, more kids were getting infected than ever before. Cases in July 2021 were around double the previous worst month of December 2020. The Doherty Model, which is being used as a foundation for Australian reopening policy, has many modeling and reporting issues which result in the Doherty Report greatly underestimating risks. (These issues are generally a result of how the report was commissioned, rather than being mistakes made by those doing the modeling.) The Doherty Model has to work with incomplete data, such as the very limited information we have about the behavior of the delta variant. The recommended practice in this kind of situation is not to make a single assumption about the premises in a model, but instead to model uncertainty, by including a range of possible values for each uncertain premise. The Doherty Model does not do this. Instead, “point estimates”, that is, a single guess for each premise, are used. And a single output is produced by the model for each scenario. This is a critical deficiency. By failing to account for uncertainty in inputs, or uncertainty in future changes (such as new variants), the model also fails to account for uncertainty in outputs.
What’s the probability that the hospitalizations are far more rapid than in their single modeled outcome, such that Australian ICUs are overloaded? We don’t know, because that work hasn’t been done. The Doherty Model makes a critical error in how it handles the Delta variant: “we will assume that the severity of Delta strains approximates Alpha strains”. We now know that this assumption is incorrect: the latest estimates are that “People who are infected with the highly contagious Delta variant are twice as likely to be hospitalized as those who are infected with the Alpha variant”. The model also fails to correctly estimate the efficacy of Test, Trace, Isolate, and Quarantine (TTIQ). It assumes that TTIQ will be “optimal” for “hundreds of daily cases”, and “partial” for thousands of cases. However, in NSW optimal TTIQ was no longer maintained after just 50 cases, and the majority of cases were no longer isolating after 100 daily cases. The Doherty Model assumes that vaccines are equally distributed throughout the country. This is mentioned in the report, and has also been confirmed by talking directly with those doing the modeling. However, there are groups where that’s not true. For instance, indigenous communities are only around ⅛ vaccinated. In this group, if restrictions are removed, then R will return towards 5.0 (the reproduction number of delta without vaccines or restrictions). As a result, nearly the entire population will be infected within months. The same thing will happen with kids. The Doherty Model fails to model school mixing, instead making the simplifying assumption that children have some random chance of meeting random other children each day. In practice, however, they have a 100% chance of mixing with exactly the same children every day, at school. The Doherty Model misses the vast majority of cases, because it entirely ignores all cases after 180 days (when most cases occur).
Another model has estimated the full impact of covid without such a time limitation. It finds that there would be around 25,000 deaths in Australia in the absence of restrictions. A major problem with the National Plan based on the Doherty Report is that it goes directly from vaccination rate to actions, and bakes in all the model assumptions. It can’t take into account unanticipated changes, such as more transmissible variants, or mass infections of hospital staff. It would be far better to decide actions in terms of measurements that reflect changing current conditions: that is, R and remaining health-care Capacity. The Doherty Institute models could be reported as estimated R and Capacity at 70% and 80% vaccination rates of adults, which is 56% and 64% of the full population. Reducing transmission restrictions when R>1, or when there is insufficient remaining Capacity, would be madness regardless of the vaccination rate. Based on current projections, in the best-case scenario in one month’s time there will be over 2000 people hospitalized with covid in NSW, with over 350 in ICU. This is going to be a big stretch on the state’s resources. The same will happen in other states that fail to control outbreaks prior to achieving at least 80% vaccination rates of all populations, including children and indigenous communities. Even when most adults are vaccinated, covid doesn’t go away. Immunity wanes after a few months, and there will continue to be groups where fewer people have been vaccinated. We can estimate the longer term impact of covid by looking at other countries. In the UK, 75% of 16+ residents are vaccinated. There are currently 700 covid deaths and 250,000 cases per week in the UK. If our death rate is proportionate, that would mean 266 Australians dying per week even after we get to 75% vaccinated (along with thousands of long covid cases, with their huge economic and societal cost). By comparison, there were 9 weekly deaths from flu in Australia in 2019.
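The conversion above from adult vaccination rates to whole-population rates can be checked directly, assuming (as the figures imply) that adults make up about 80% of the Australian population:

```python
adult_share = 0.80  # assumed adult fraction of the population, implied by the 56%/64% figures
rates = {adult_rate: adult_rate * adult_share for adult_rate in (0.70, 0.80)}
for adult_rate, whole in rates.items():
    print(f"{adult_rate:.0%} of adults = {whole:.0%} of the full population")
```

This is why "70% of adults vaccinated" leaves well over 40% of the whole population, including all children, unprotected.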
Conclusion We are now hearing political leaders in Victoria and NSW giving up on getting the outbreaks under control. But we haven’t yet deployed the three easiest high-impact public health interventions we have at our disposal: better masks, better ventilation, and rapid tests. Any one of these (along with the existing measures) would be likely to neutralize the outbreaks; their impacts combined will be a powerful weapon. If we don’t do this, then covid will leave hundreds of thousands of Australian children with chronic illness, and kill thousands of Australians. This is entirely avoidable. Acknowledgements: Thanks to Dr Rachel Thomas for many discussions about this topic and for draft review. Thanks also to the many Australian scientists with whom I consulted during development of this article. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Markus_Persson#cite_note-27] | [TOKENS: 3525] |
Markus Persson

Markus Alexej Persson (/ˈpɪərsən/ PEER-sən, Swedish: [ˈmǎrːkɵs ˈpæ̌ːʂɔn]; born 1 June 1979), known by the pseudonym Notch, is a Swedish video game programmer and designer. He is the creator of Minecraft, the best-selling video game in history. He founded the video game development company Mojang Studios in 2009. Persson began developing video games at an early age. His commercial success began after he published an early version of Minecraft in 2009. Prior to the game's official retail release in 2011, it had sold over four million copies. After this point Persson stood down as the lead designer and transferred his creative authority to Jens Bergensten. In September 2014 Persson announced his intention to leave Mojang, and in November of that year the company was sold to Microsoft, reportedly for US$2.5 billion, which made him a billionaire. Since 2016 several of Persson's posts on Twitter regarding feminism, race, and transgender rights have caused public controversies. He has been described as "an increasingly polarizing figure, tweeting offensive statements regarding race, the LGBTQ community, gender, and other topics." In an effort to distance itself from Persson, Microsoft removed mentions of his name from Minecraft (excluding one instance in the game's end credits) and did not invite him to the game's tenth anniversary celebration. In 2015 he co-founded a separate game studio called Rubberbrain, which was relaunched in 2024 as Bitshift Entertainment.

Early life

Markus Alexej Persson was born in Stockholm, Sweden, to a Finnish mother, Ritva, and a Swedish father, Birger, on 1 June 1979. He has one sister. He grew up in Edsbyn until he was seven years old, when his family moved back to Stockholm. In Edsbyn, Persson's father worked for the railroad, and his mother was a nurse. He spent much time outdoors in Edsbyn, exploring the woods with his friends.
When Persson was about seven years old, his parents divorced, and he and his sister lived with their mother. His father moved to a cabin in the countryside. Persson said in an interview that they experienced food insecurity around once a month. Persson lost contact with his father for several years after the divorce. According to Persson, his father suffered from depression, bipolar disorder, alcoholism, and medication abuse, and went to jail for robberies. While his father had somewhat recovered during Persson's early life, he relapsed, contributing to the divorce. His sister also experimented with drugs and ran away from home. Persson gained an interest in video games at an early age. His father was "a really big nerd" who built his own modem and taught Persson to use the family's Commodore 128. On it, Persson played bootleg games and loaded in various type-in programs from computer magazines with the help of his sister. The first game he purchased with his own money was The Bard's Tale. He began programming on his father's Commodore 128 home computer at the age of seven. He produced his first game at the age of eight, a text-based adventure game. By 1994 Persson knew he wanted to become a video game developer, but his teachers advised him to study graphic design, which he did from ages 15 to 18. Persson, although introverted, was well-liked by his peers, but after entering secondary school became a "loner" and reportedly had only one friend. He spent most of his spare time with games and programming at home. He managed to reverse-engineer the Doom engine, which he continued to take great pride in as of 2014. He never finished high school, but was reportedly a good student.

Career

Persson started his career working as a web designer. He later found employment at Game Federation, where he met Rolf Jansson. The pair worked in their spare time to build the 2006 video game Wurm Online. The game was released through a new entity, "Mojang Specifications AB".
Persson left the project in late 2007. As Persson wanted to reuse the name "Mojang", Jansson agreed to rename the company to Onetoofree AB. Between 2004 and 2009 Persson worked as a game developer for Midasplayer (later known as King). There, he worked as a programmer, mostly building browser games made in Flash. He later worked as a programmer for jAlbum. Prior to creating Minecraft, Persson developed multiple small games. He also entered a number of game design competitions and participated in discussions on TIGSource, a web forum for independent game developers. One of Persson's more notable personal projects was called RubyDung, an isometric three-dimensional base-building game in the vein of RollerCoaster Tycoon and Dwarf Fortress. While working on RubyDung, Persson experimented with a first-person view mode similar to that found in Dungeon Keeper. However, he felt the graphics were too pixelated and omitted this mode. In 2009 Persson found inspiration in Infiniminer, a block-based open-ended mining game. Infiniminer heavily influenced his future work on RubyDung, and motivated Persson to bring the first-person mode, the "blocky" visual style and the block-building fundamentals back to the game. RubyDung is the earliest known Minecraft prototype created by Persson. On 17 May 2009 Persson released the original edition (later called the "Classic version") of Minecraft on the TIGSource forums. He regularly updated the game based on feedback from TIGSource users. Persson released several new versions of Minecraft throughout 2009 and 2010, going through several phases of development including Survival Test, Indev, and Infdev. On 30 June 2010 Persson released the game's Alpha version. While working on the pre-Alpha version of Minecraft, Persson continued working at jAlbum. In 2010, after the release and subsequent success of Minecraft's Alpha version, Persson moved from a full-time role to a part-time role at jAlbum. He left jAlbum later that same year.
In September 2010 Persson travelled to Valve Corporation's headquarters in Bellevue, Washington, United States, where he took part in a programming exercise and met Gabe Newell. Persson was subsequently offered a job at Valve, which he turned down in order to continue work on Minecraft. On 20 December 2010 Minecraft moved into its beta phase and began expanding to other platforms, including mobile. In January 2011 Minecraft reached one million registered accounts. Six months afterwards, it reached ten million. The game had sold over four million copies by 7 November 2011. Mojang held the first Minecon from 18 to 19 November 2011 to celebrate the game's full release, and subsequently made it an annual event. Following this, on 11 December 2011, Persson transferred creative control of Minecraft to Jens Bergensten and began working on another game title, 0x10c, although he reportedly abandoned the project around 2013. In 2013 Mojang recorded revenues of $330 million and profits of $129 million. Persson has stated that, due to the intense media attention and public pressure, he became exhausted with running Minecraft and Mojang. In a September 2014 blog post he shared his realization that he "didn't have the connection to my fans I thought I had", that he had "become a symbol", and that he did not wish to be responsible for Mojang's increasingly large operation. In June 2014 Persson tweeted "Anyone want to buy my share of Mojang so I can move on with my life? Getting hate for trying to do the right thing is not my gig", reportedly partly as a joke. Persson controlled a 71% stake in Mojang at the time. The offer attracted significant interest from Activision Blizzard, EA, and Microsoft. Forbes later reported that Microsoft wanted to purchase the game as a "tax dodge" to turn its taxable excess liquid cash into other assets. In September 2014 Microsoft agreed to purchase Mojang for $2.5 billion, making Persson a billionaire.
He then left the company after the deal was finalised in November. Since leaving Mojang, Persson has worked on several small projects. On 23 June 2014 he founded a company with Jakob Porsér called Rubberbrain AB; the company had released no games by 2021, despite spending SEK 60 million. The company was relaunched as Bitshift Entertainment, LLC on 28 March 2024. Persson expressed interest in creating a new video game studio in 2020, and in developing virtual reality games. He has also since created a series of narrative-driven immersive events called ".party()", which uses extensive visual effects and has been hosted in multiple cities. At the beginning of 2025 Persson decided to create a spiritual successor to Minecraft, referred to as "Minecraft 2", in response to the results of a poll on X. However, after speaking to his team, he shortly reversed this decision in favour of developing the other choice in the poll, a roguelike titled Levers and Chests.

Games

Persson's most popular creation is the survival sandbox game Minecraft, which was first publicly available on 17 May 2009 and fully released on 18 November 2011. Persson left his job as a game developer to work on Minecraft full-time until completion. In early 2011, Mojang AB sold the one millionth copy of the game, its second million several months later, and its third several months after that. Mojang hired several new staff members for the Minecraft team, while Persson passed the lead developer role to Jens Bergensten. He stopped working on Minecraft after the deal with Microsoft to sell Mojang for $2.5 billion, which brought his net worth to US$1.5 billion. Persson and Jakob Porsér came up with the idea for Scrolls, which combines elements from board games and collectible card games. Persson noted that he would not be actively involved in development of the game and that Porsér would be developing it.
Persson revealed on his Tumblr blog on 5 August 2011 that he was being sued by a Swedish law firm representing Bethesda Softworks over the trademarked name of Scrolls, claiming that it conflicted with their The Elder Scrolls series of games. On 17 August 2011 Persson challenged Bethesda to a Quake 3 tournament to decide the outcome of the naming dispute. On 27 September 2011 Persson confirmed that the lawsuit was going to court. ZeniMax Media, owner of Bethesda Softworks, announced the lawsuit's settlement in March 2012. The settlement allowed Mojang to continue using the Scrolls trademark. In 2018, Scrolls was made available free of charge and renamed Caller's Bane. Cliffhorse is a humorous game programmed in two hours using the Unity game engine and free assets. The game took inspiration from Skyrim's physics engine, "the more embarrassing minimum-effort Greenlight games", Goat Simulator, and Big Rigs: Over the Road Racing. The game was released for Microsoft Windows as an early access and honourware game on the first day of E3 2014, instructing users to donate Dogecoin to "buy" the game before downloading it. The game accumulated over 280,000 dogecoins. Following the end of his involvement with Minecraft, Persson began pre-production of an alternate-reality space game set in the distant future in March 2012. On April Fools' Day Mojang launched a satirical website for Mars Effect (a parody of Mass Effect), citing the lawsuit with Bethesda as an inspiration. However, the gameplay elements described remained true, and on 4 April Mojang revealed 0x10c (pronounced "Ten to the C") as a space sandbox title. Persson officially halted production of the game in August 2013. However, C418, the composer of the game's soundtrack (as well as that of Minecraft), released an album of the work he had made for the game. In 2013, Persson made a free game called Shambles in the Unity game engine. Persson has also participated in several Ludum Dare 48-hour game making competitions.
Personal life

In 2011 Persson married Elin Zetterstrand, whom he had dated for four years. Zetterstrand was a former moderator on the Minecraft forums. They had a daughter together, but by mid-2012 he began to see little of her. On 15 August 2012 he announced that he and his wife had filed for divorce. The divorce was finalised later that year. On 14 December 2011 Persson's father committed suicide with a handgun after drinking heavily. In an interview with The New Yorker, Persson said of his father: "When I decided I wanted to quit my day job and work on my own games, he was the only person who supported my decision. He was proud of me and made sure I knew. When I added the monsters to Minecraft, he told me that the dark caves became too scary for him. But I think that was the only true criticism I ever heard from him." Persson later admitted that he himself suffered from depression and various highs and lows in his mood. Persson has criticised the stance of large game companies on piracy. He once stated that "piracy is not theft", viewing unauthorised downloads as potential future customers. Persson stated himself to be a member of the Pirate Party of Sweden in 2011. He is also a member of Mensa. He has donated to numerous charities, including Médecins Sans Frontières (Doctors Without Borders). Under his direction, Mojang spent a week developing Catacomb Snatch for the Humble Indie Bundle and raised US$458,248 for charity. He also donated $250,000 to the Electronic Frontier Foundation in 2012. In 2011 he gave $3 million in dividends back to Mojang employees. According to Forbes, his net worth in 2023 was around $1.2 billion. In 2014 Persson was one of the biggest taxpayers in Sweden. Around 2014, he lived in a multi-level penthouse in Östermalm, Stockholm, an area he described as "where the rich people live".
In December 2014 Persson purchased a home in Trousdale Estates, a neighbourhood in Beverly Hills, California, in the United States, for $70 million, a record sales price for Beverly Hills at the time. Persson reportedly outbid Beyoncé and Jay-Z for the property. Persson began receiving criticism for political and social opinions he expressed on social media as early as 2016. In 2017, he proposed a heterosexual pride holiday, and wrote that those who opposed the idea "deserve to be shot." After facing backlash, he deleted the tweets and rescinded his statements, writing, "So yeah, it's about pride of daring to express, not about pride of being who you are. I get it now." Later in the year, he wrote that feminism is a "social disease" and called the video game developer and feminist Zoë Quinn a "cunt", although he was generally critical of the GamerGate movement. He has described intersectional feminism as a "framework for bigotry" and the use of the word mansplaining as being sexist. Also in 2017, Persson tweeted that "It's okay to be white". Later that year, he stated that he believed in the Pizzagate conspiracy theory. In 2019, he tweeted referencing QAnon, saying "Q is legit. Don't trust the media." Later in 2019, he tweeted in response to a pro-transgender internet meme that, "You are absolutely evil if you want to encourage delusion. What happened to not stigmatizing mental illness?" He then also promoted claims that people were fined for "using the wrong pronoun". However, after facing backlash, he tweeted a day afterwards that he had "no idea what [being trans is] like of course, but it's inspiring as hell when people open up and choose to actually be who they know themselves as. Not because it's a cool choice, because it's a big step. I gues [sic] that's actually cool nvm".
Later that year, Microsoft removed two mentions of Persson's name in the "19w13a" snapshot of Minecraft and did not invite him to the 10-year anniversary celebration of the game. A spokesperson for Microsoft stated that his views "do not reflect those of Microsoft or Mojang". He is still mentioned in the End Poem ("a flat, infinite world created by a man called Markus").
======================================== |
[SOURCE: https://www.bbc.com/news/articles/c70nep7p9j6o] | [TOKENS: 1695] |
Child abuse increasing and more complex to police, crime agency says

By Kathryn Armstrong

Tech companies need to do more to tackle the rising threat of online child abuse, the NCA says (image: PA Media). Child sex abuse is becoming increasingly complex to police and officers are arresting an average of 1,000 potential offenders each month, the National Crime Agency (NCA) says. It says an increasing reliance on online platforms and advances in technology, such as AI image creation, are exacerbating the problem, with algorithms and digital communities connecting offenders to share and promote child sex abuse material. According to the NCA, the number of arrests has roughly doubled in the past three years. Statistically, potential offenders are in every community and victims in every school, the NCA said. It added that police cannot address the issue alone and called on technology companies to do more. In one week in January alone, the NCA said it and police forces across the UK arrested 252 people, 118 of whom were charged, and safeguarded 407 children. It added that this level of action was now taking place regularly. "The scale and prevalence of the CSA [child sexual abuse] threat has increased in severity and complexity over the years," the NCA said in a statement, describing it as "one of the most significant threats across the UK". "On a daily basis, officers at the NCA and across policing are assessing some of the most obscene child abuse imaginable.
And this is not hidden in the dark web – it's being shared on social media and is accessible on the clear web as well for anyone to see," said Rob Jones, the NCA's director of general operations. "This is the regulated environment that should be the safest part of the system." Jones added that while offenders are collaborating and co-ordinating their activities on the dark web - an encrypted corner of the internet only accessible using special software designed to make owners digitally untraceable - they are using the mainstream internet as a "discovery platform to identify and abuse vulnerable children". "Due to the way algorithms drive people with like-minded interests together - and because of the way people operate - they will be told what they are doing is normal," the NCA said. It has found that financially motivated sexual extortion (FMSE), especially of young boys, is on the rise - with offenders commissioning livestreamed sexual abuse of children on demand for £20. The agency cautions that abuse is not only happening online, and that there is increasing evidence of a link between the viewing of child sexual abuse material and physical abuse. "The response to the continual CSA threat cannot be one for policing alone - a whole-system approach is the only way to protect children effectively," said Jones.
Becky Riggs, National Police Chiefs' Lead for Child Protection and Abuse Investigation, said: "We need technology companies to act with urgency to make their platforms hostile environments for offenders. That means developing and implementing solutions that prevent children from taking, sharing or viewing nude images online, improving the detection of child sexual abuse material, and ensuring platforms are built safer by design." Jess Phillips, Minister for Safeguarding and Violence against Women and Girls, said the government is funding "a network of undercover officers online and a dedicated police taskforce to disrupt crimes, catch offenders and protect children". Prime Minister Sir Keir Starmer has also pledged to respond more quickly to close loopholes in laws designed to protect children online through the Online Safety Act.
It added that this level of action was now taking place regularly. "The scale and prevalence of the CSA [child sexual abuse] threat has increased in severity and complexity over the years," the NCA said in a statement, describing it as "one of the most significant threats across the UK". "On a daily basis, officers at the NCA and across policing are assessing some of the most obscene child abuse imaginable. And this is not hidden in the dark web – it's being shared on social media and is accessible on the clear web as well for anyone to see," said Rob Jones, the NCA's director of general operations. "This is the regulated environment that should be the safest part of the system." Jones adds that while offenders are collaborating and co-ordinating their activities on the dark web - an encrypted corner of the internet only accessible using special software designed to make owners digitally untraceable - they are using the mainstream internet as a "discovery platform to identify and abuse vulnerable children". "Due to the way algorithms drive people with like-minded interests together - and because of the way people operate - they will be told what they are doing is normal," the NCA said. It has found that financially motivated sexual extortion (FMSE), especially of young boys, is on the rise - with offenders commissioning livestreamed sexual abuse of children on demand for £20. The agency is cautioning that abuse is not only happening online and that increasing evidence showing that there is a link between the viewing of child sexual abuse material and physical abuse. "The response to the continual CSA threat cannot be one for policing alone - a whole‑system approach is the only way to protect children effectively," said Jones. Becky Riggs, National Police Chiefs' Lead for Child Protection and Abuse Investigation, said: "We need technology companies to act with urgency to make their platforms hostile environments for offenders. 
"That means developing and implementing solutions that prevent children from taking, sharing or viewing nude images online, improving the detection of child sexual abuse material, and ensuring platforms are built safer by design." Jess Phillips, Minister for Safeguarding and Violence against Women and Girls, said the government is funding "a network of undercover officers online and a dedicated police taskforce to disrupt crimes, catch offenders and protect children". Prime Minister Sir Keir Starmer has also pledged to respond more quickly to close loopholes in laws designed to protect children online through the Online Safety Act.

Copyright 2026 BBC. All rights reserved. The BBC is not responsible for the content of external sites. Read about our approach to external linking.
======================================== |
[SOURCE: https://www.bbc.com/news/articles/cx2rg2vndk1o]
Social media suspended in Gabon for 'spreading of false information'

Wycliffe Muia and Paul Njie, BBC Africa

Gabon's media regulator has announced the suspension of social media platforms "until further notice", saying online content has fuelled conflict and deepened divisions in the country. In a televised statement on Tuesday evening, the High Authority for Communication (HAC) cited the "spread of false information", "cyberbullying" and the "unauthorised disclosure of personal data" as reasons for the decision. Internet monitoring group NetBlocks reported that by Wednesday afternoon multiple online platforms had been restricted, including Facebook, Instagram, TikTok, YouTube and WhatsApp.

Gabon is led by Gen Brice Oligui Nguema, who won presidential elections last year after leading a coup in 2023. The 50-year-old president is facing growing social unrest, with teachers and civil servants staging strikes over pay and working conditions. According to NetBlocks, most internet providers had blocked access to the social media platforms, though its data showed that Gabon Telecom, the country's largest telecoms firm, was allowing very limited access.

The HAC's announcement has come as a shock to the central African nation of about 2.5 million people, where social media is particularly popular with younger people who use it for business as well as pleasure. Speaking on condition of anonymity, a restaurant owner in the capital, Libreville, told the BBC the suspension would greatly affect his business, since he uses social media for promotion. "Almost 40% of my customers decided to order or come to the restaurant after seeing our advertising on social media… I won't be able to catch new customers, because clients are attracted by what they are seeing, reviews from friends, pictures," he said. "We are entering a phase where we don't even know if we are moving forward with global development or if we are sliding backward into total underdevelopment."

However, a taxi driver seemed unbothered about the move, telling the BBC: "There's no smoke without fire. For the authorities to take such a decision, something must have certainly prompted it."

Nguema won last year's poll with more than 90% of the vote, two years after his coup ended more than five decades of rule by the Bongo family. At the time he pledged to reform Gabon, a small, oil- and timber-rich country, where digital blackouts were used by the previous governments to control information. For the first time, foreign and independent media were allowed to film the ballot count during the election.

The media regulator spokesman, Jean-Claude Mendome, said the suspension was prompted by the recurring dissemination on social networks and digital platforms of "inappropriate, defamatory, hateful, and insulting content that undermines human dignity, social cohesion, the stability of the republic's institutions, and national security". Such actions, he said, were likely to "generate social conflict" and "seriously jeopardise national unity, democratic progress, and achievements". But "freedom of expression, including freedom of comment and criticism," remained "a fundamental right enshrined in Gabon", Mendome added.

School teachers in Gabon began striking in December over pay and working conditions, with protests over similar grievances spreading to other public sectors, including health and education.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-38]
World

The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents.
Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. 
There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. 
This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else, but the different worlds do not share a common spacetime: they are spatiotemporally isolated from each other.
This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. 
This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. 
Against pantheism, it holds that there is no outright identity between the two. History of philosophy In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is due to the fact that physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world".
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things, it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just as the play is more than the imaginary realities appearing in it, so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world.
Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes. Religion Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of "us" possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a wide sense in this tradition, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools.
Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation is possible through overcoming this illusion by acquiring the knowledge of Brahman. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar). Related terms and problems A worldview is a comprehensive representation of the world and our place in it.
As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities.
This seems to contradict the very idea of a plurality of worlds: if a world is total and all-inclusive, then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth, while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time, and matter all have their origin in an initial singularity about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to cool down sufficiently for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized by their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. 
In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. 
A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. 
Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberals acknowledge the importance of states but also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, in this view, is more complex than a mere balance of power, since more agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. If the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it this way, and it may change, since it is not prefigured by human nature, according to the constructivists. |
======================================== |
[SOURCE: https://www.fast.ai/posts/2021-08-17-eleven-videos.html] | [TOKENS: 615] |
11 Short Videos About AI Ethics Rachel Thomas August 16, 2021 I made a playlist of 11 short videos (most are 6-13 mins long) on Ethics in Machine Learning. This is from my ethics lecture in Practical Deep Learning for Coders v4. I thought these short videos would be easier to watch, share, or skip around. What are Ethics and Why do they Matter? Machine Learning Edition: Through 3 key case studies, I cover how people can be harmed by machine learning gone wrong, why we as machine learning practitioners should care, and what tech ethics are. All machine learning systems need ways to identify & address mistakes. It is crucial that all machine learning systems are implemented with ways to correctly surface and correct mistakes, and to provide recourse to those harmed. The Problem with Metrics, Feedback Loops, and Hypergrowth: Overreliance on metrics is a core problem both in the field of machine learning and in the tech industry more broadly. As Goodhart’s Law tells us, when a measure becomes the target, it ceases to be a good measure, yet the incentives of venture capital push companies in this direction. We see out-of-control feedback loops, widespread gaming of metrics, and people being harmed as a result. Not all types of bias are fixed by diversifying your dataset. The idea of bias is often too general to be useful. There are several different types of bias, and different types require different interventions to try to address them. Through a series of case studies, we will go deeper into some of the various causes of bias. Humans are biased too, so why does machine learning bias matter? A common objection to concerns about bias in machine learning models is to point out that humans are really biased too. This is correct, yet machine learning bias differs from human bias in several key ways that we need to understand and which can heighten the impact. 
7 Questions to Ask About Your Machine Learning Project What You Need to Know about Disinformation: With a particular focus on how machine learning advances can contribute to disinformation, this covers some of the fundamental things to understand. Foundations of Ethics: We consider different lenses through which to evaluate ethics, and what sort of questions to ask. Tech Ethics Practices to Implement at your Workplace: Practical tech ethics practices you can implement at your workplace. How to Address the Machine Learning Diversity Crisis: Only 12% of machine learning researchers are women. Based on research studies, I outline some evidence-based steps to take towards addressing this diversity crisis. Advanced Technology is not a Substitute for Good Policy: We will look at some examples of what incentives cause companies to change their behavior or not (e.g. being warned for years of your role in an escalating genocide vs. threat of a hefty fine), how many AI ethics concerns are actually about human rights, and case studies of what happened when regulation & safety standards came to other industries. You can find the playlist of 11 short videos here. And here is a longer, full-length free fast.ai course on practical data ethics. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Panentheism] | [TOKENS: 4836] |
Panentheism Panentheism (/pæˈnɛnθiɪzəm/; "all in God", from the Greek πᾶν, pân, 'all', ἐν, en, 'in' and Θεός, Theós, 'God') is the belief that the divine pervades and interpenetrates every part of the universe and also extends beyond space and time. The term was coined by the German philosopher Karl Krause in 1828 (after reviewing Hindu scripture) to distinguish the ideas of Georg Wilhelm Friedrich Hegel (1770–1831) and Friedrich Wilhelm Joseph Schelling (1775–1854) about the relation of God and the universe from the supposed pantheism of Baruch Spinoza. Unlike pantheism, which holds that the divine and the universe are identical, panentheism maintains an ontological distinction between the divine and the non-divine and the significance of both. In panentheism, the universal spirit is present everywhere and at the same time "transcends" all things created. Whilst pantheism asserts that "all is God", panentheism claims that God is greater than the universe. Some versions of panentheism suggest that the universe is nothing more than the manifestation of God. In addition, some forms indicate that the universe is contained within God, as in the Kabbalistic concept of Tzimtzum or the Sufi concept of Wahdat al-wujud. Much of Hindu thought is highly characterized by panentheism and pantheism. In philosophy The religious beliefs of Neoplatonism can be regarded as panentheistic. Plotinus taught that there was an ineffable transcendent God ("the One", to En, τὸ Ἕν) of which subsequent realities were emanations. From "the One" emanates the Divine Mind (Nous, Νοῦς) and the Cosmic Soul (Psyche, Ψυχή). In Neoplatonism the world itself is God (according to Plato's Timaeus 37). This concept of divinity is associated with that of the Logos (Λόγος), which had originated centuries earlier with Heraclitus (c. 535–475 BC). The Logos pervades the cosmos, whereby all thoughts and all things originate, or as Heraclitus said: "He who hears not me but the Logos will say: All is one." 
Neoplatonists such as Iamblichus attempted to reconcile this perspective by adding another hypostasis above the original monad of force or Dynamis (Δύναμις). This new all-pervasive monad encompassed all creation and its original uncreated emanations. Baruch Spinoza later claimed that "Whatsoever is, is in God, and without God nothing can be, or be conceived." "Individual things are nothing but modifications of the attributes of God, or modes by which the attributes of God are expressed in a fixed and definite manner." Though Spinoza has been called the "prophet" and "prince" of pantheism, in a letter to Henry Oldenburg Spinoza states that: "as to the view of certain people that I identify god with nature (taken as a kind of mass or corporeal matter), they are quite mistaken". For Spinoza, our universe (cosmos) is a mode under two attributes of Thought and Extension. God has infinitely many other attributes which are not present in our world. According to German philosopher Karl Jaspers, when Spinoza wrote "Deus sive Natura" (God or Nature) Spinoza did not mean to say that God and Nature are interchangeable terms, but rather that God's transcendence was attested by God's infinitely many attributes, and that two attributes known by humans, namely Thought and Extension, signified God's immanence. Furthermore, Martial Guéroult suggested the term panentheism, rather than pantheism to describe Spinoza's view of the relation between God and the world. The world is not God, but it is, in a strong sense, "in" God. Yet, American philosopher and self-described panentheist Charles Hartshorne referred to Spinoza's philosophy as "classical pantheism" and distinguished Spinoza's philosophy from panentheism. In 1828, the German philosopher Karl Christian Friedrich Krause (1781–1832) seeking to reconcile monotheism and pantheism, coined the term panentheism (from the Ancient Greek expression πᾶν ἐν θεῷ, pān en theṓ, literally "all in god"). 
This conception of God influenced New England transcendentalists such as Ralph Waldo Emerson. The term was popularized by Charles Hartshorne in his development of process theology and has also been closely identified with the New Thought. The formalization of this term in the West in the 19th century was not new; philosophical treatises had been written on it in the context of Hinduism for millennia. Philosophers who embraced panentheism have included Thomas Hill Green (1839–1882), James Ward (1843–1925), Andrew Seth Pringle-Pattison (1856–1931) and Samuel Alexander (1859–1938). Beginning in the 1940s, Hartshorne examined numerous conceptions of God. He reviewed and discarded pantheism, deism, and pandeism in favor of panentheism, finding that such a "doctrine contains all of deism and pandeism except their arbitrary negations". Hartshorne formulated God as a being who could become "more perfect": God has absolute perfection in categories for which absolute perfection is possible, and relative perfection (i. e., is superior to all others) in categories for which perfection cannot be precisely determined. In religion The Reverend Zen Master Soyen Shaku was the first Zen Buddhist Abbot to tour the United States in 1905–6. He wrote a series of essays collected in the book Zen For Americans. In the essay titled "The God Conception of Buddhism," he attempts to explain how a Buddhist looks at the Ultimate without an anthropomorphic God figure while still being able to relate to the term God in a Buddhist sense: At the outset, let me state that Buddhism is not atheistic as the term is ordinarily understood. It has certainly a God, the highest reality and truth, through which and in which this universe exists. However, the followers of Buddhism usually avoid the term God, for it savors so much of Christianity, whose spirit is not always exactly in accord with the Buddhist interpretation of religious experience. 
Again, Buddhism is not pantheistic in the sense that it identifies the universe with God. On the other hand, the Buddhist God is absolute and transcendent; this world, being merely its manifestation, is necessarily fragmental and imperfect. To define more exactly the Buddhist notion of the highest being, it may be convenient to borrow the term very happily coined by a modern German scholar, "panentheism," according to which God is πᾶν καὶ ἕν (all and one) and more than the totality of existence. The essay then goes on to explain, first utilizing the term "God" for the American audience to get an initial understanding of what he means by "panentheism," and then discussing the terms that Buddhism uses in place of "God" such as Dharmakaya, Buddha or Adi-Buddha, and Tathagata. Panentheism is also a feature of some Christian philosophical theologies and resonates strongly within the theological tradition of the Eastern Orthodox Church. It also appears in process theology. Process theological thinkers are generally regarded as unorthodox in the Christian West. Furthermore, process philosophy is widely believed to have paved the way for open theism, a movement that tends to associate itself primarily with the Evangelical branch of Protestantism but is also generally considered unorthodox by most evangelicals. A number of ordained Catholic writers (including Richard Rohr, David Steindl-Rast, and Thomas Keating) have suggested that panentheism is the original view of Christianity. They hold that such a view is directly supported by mystical experience and the teachings of Jesus and Paul the Apostle. Richard Rohr summarizes this in his 2019 book The Universal Christ: But Paul merely took incarnationalism to its universal and logical conclusions. We see that in his bold exclamation “There is only Christ. He is everything and he is in everything” (Colossians 3:11). 
If I were to write that today, people would call me a pantheist (the universe is God), whereas I am really a panentheist (God lies within all things, but also transcends them), exactly like both Jesus and Paul. Similarly, David Steindl-Rast posits that Christianity's original panentheism is being revealed through contemporary mystical insight: What characterizes our moment in history is the collapse of Christian theism. Gratefulness mysticism makes us realize that Christianity never was theistic, but panentheistic. Faith in God as triune implied this from the very beginning; now we are becoming aware of it. It becomes obvious, at the same time, that we share this Trinitarian experience of divine life with all human beings as a spiritual undercurrent in all religions, an undercurrent older and more powerful than the various doctrines. At the core of interreligious dialogue flows this shared spirituality of gratefulness, a spirituality strong enough to restore to our broken world unity. This sentiment is mirrored in Thomas Keating's 1993 article, Clarifications Regarding Centering Prayer: Pantheism is usually defined as the identification of God with creation in such a way that the two are indistinguishable. Panentheism means that God is present in all creation by virtue of his omnipresence and omnipotence, sustaining every creature in being without being identified with any creature. The latter understanding is what Jesus seems to have been describing when he prays "that all might be one, Father, as we are one" and "that they may also be in us" (John 17:22). Again and again, in the Last Supper discourse, he speaks of this oneness and his intentions to send his Spirit to dwell within us. If we understand the writings of the great mystics rightly, they experience God living within them all the time. Thus the affirmation of God's transcendence must always be balanced by the affirmation of his immanence both on the natural plane and on the plane of grace. 
Panentheistic conceptions of God occur amongst some modern theologians. Process theology and Creation Spirituality, two recent developments in Christian theology, contain panentheistic ideas. Charles Hartshorne (1897–2000), who conjoined process theology with panentheism, maintained a lifelong membership in the Methodist church but was also a Unitarian. In later years, he joined the Austin, Texas, Unitarian Universalist congregation and was an active participant in that church. Referring to ideas such as Thomas Oord's theocosmocentrism (2010), the soft panentheism of open theism, Keith Ward's comparative theology and John Polkinghorne's critical realism (2009), Raymond Potgieter observes distinctions such as dipolar and bipolar: The former suggests two poles separated such as God influencing creation and it in turn its creator (Bangert 2006:168), whereas bipolarity completes God’s being implying interdependence between temporal and eternal poles. (Marbaniang 2011:133), in dealing with Whitehead’s approach, does not make this distinction. I use the term bipolar as a generic term to include suggestions of the structural definition of God’s transcendence and immanence; to for instance accommodate a present and future reality into which deity must reasonably fit and function, and yet maintain separation from this world and evil whilst remaining within it. Some argue that panentheism should also include the notion that God has always been related to some world or another, which denies the idea of creation out of nothing (creatio ex nihilo). Nazarene Methodist theologian Thomas Jay Oord (born 1965) advocates panentheism, but he uses the word "theocosmocentrism" to highlight the notion that God and some world or another are the primary conceptual starting blocks for eminently fruitful theology. This form of panentheism helps overcome the problem of evil and proposes that God's love for the world is essential to who God is. 
The Latter Day Saint movement teaches that the Light of Christ "proceeds from God through Christ and gives life and light to all things". The Manichaeans, another gnostic sect, preached a very different doctrine in positioning the true Manichaean God against matter as well as other deities, which it described as enmeshed with the world, namely the gods of Jews, Christians, and pagans. Nevertheless, this dualistic teaching included an elaborate cosmological myth that narrates the defeat of primal man by the powers of darkness that devoured and imprisoned the particles of light. Valentinianism taught that matter came about through emanations of the supreme being, even if, to some, this event is held to be more accidental than intentional. To other gnostics, these emanations were akin to the Sephirot of the Kabbalists and deliberate manifestations of a transcendent God through a complex system of intermediaries. The earliest reference to panentheistic thought in Hindu philosophy is in a creation myth contained in the later section of the Rig Veda called the Purusha Sukta, which was compiled before 1100 BCE. The Purusha Sukta gives a description of the spiritual unity of the cosmos. It presents the nature of Purusha, or the cosmic being, as both immanent in the manifested world and yet transcendent. From this being, the sukta holds, the original creative will proceeds, by which this vast universe is projected in space and time. The most influential and dominant school of Indian philosophy, Advaita Vedanta, rejects theism and dualism by insisting that "Brahman [ultimate reality] is without parts or attributes...one without a second." Since Brahman has no properties, contains no internal diversity and is identical with the whole reality, it cannot be understood as an anthropomorphic personal God. The relationship between Brahman and the creation is often thought to be panentheistic. Panentheism is also expressed in the Bhagavad Gita. 
In verse IX.4, Krishna states: By Me all this universe is pervaded through My unmanifested form. All beings abide in Me but I do not abide in them. Many schools of Hindu thought espouse monistic theism, which is believed to be similar to a panentheistic viewpoint. Nimbarka's school of differential monism (Dvaitadvaita), Ramanuja's school of qualified monism (Vishistadvaita), and Saiva Siddhanta and Kashmir Shaivism are all considered to be panentheistic. Chaitanya Mahaprabhu's Gaudiya Vaishnavism, which elucidates the doctrine of Achintya Bheda Abheda (inconceivable oneness and difference), is also thought to be panentheistic. In Kashmir Shaivism, all things are believed to be a manifestation of Universal Consciousness (Cit or Brahman). So from the point of view of this school, the phenomenal world (Śakti) is real, and it exists and has its being in Consciousness (Ćit). Thus, Kashmir Shaivism also propounds theistic monism or panentheism. Shaktism, or Tantra, is regarded as an Indian prototype of panentheism. Shakti is considered to be the cosmos itself – she is the embodiment of energy and dynamism and the motivating force behind all action and existence in the material universe. Shiva is her transcendent masculine aspect, providing the divine ground of all being. "There is no Shiva without Shakti, or Shakti without Shiva. The two ... in themselves are One." Thus, it is she who becomes the time and space, the cosmos; it is she who becomes the five elements, and thus all animate life and inanimate forms. She is the primordial energy that holds all creation and destruction, all cycles of birth and death, all laws of cause and effect within herself, and yet is greater than the sum total of all these. She is transcendent but becomes immanent as the cosmos (Mula Prakriti). She, the primordial energy, directly becomes matter. While mainstream Rabbinic Judaism is classically monotheistic and follows in the footsteps of Maimonides (c. 
1135–1204), the panentheistic conception of God can be found among certain mystical Jewish traditions. A leading scholar of Kabbalah, Moshe Idel, ascribes this doctrine to the kabbalistic system of Moses ben Jacob Cordovero (1522–1570), and in the eighteenth century, to the Baal Shem Tov (c. 1700–1760), founder of the Hasidic movement, as well as his contemporaries, Rabbi Dov Ber of Mezeritch (died 1772) and Menahem Mendel, the Maggid of Bar. There is some debate as to whether Isaac Luria (1534–1572) and Lurianic Kabbalah, with its doctrine of tzimtzum, can be regarded as panentheistic. According to Hasidism, the infinite Ein Sof is incorporeal and exists in a state that is both transcendent and immanent. This also appears to be the view of the non-Hasidic Rabbi Chaim of Volozhin. Hasidic Judaism merges the ideal of nullification with a transcendent God via the intellectual articulation of inner dimensions through Kabbalah and with emphasis on the panentheistic divine immanence in everything. Many scholars would argue that "panentheism" is the best single-word description of the philosophical theology of Baruch Spinoza, a Jew. It is therefore no surprise that aspects of panentheism are also evident in the theology of Reconstructionist Judaism as presented in the writings of Mordecai Kaplan (1881–1983), whom Spinoza strongly influenced. Many contemporary Sikhs have suggested that human souls and the monotheistic God are two different realities (dualism), distinguishing Sikhism from the monistic and various shades of nondualistic philosophies of other Indian religions. However, Sikh scholars have explored nondualist exegesis of Sikh scriptures, such as Bhai Vir Singh. According to Mandair, Vir Singh interprets the Sikh scriptures as teaching nonduality. The renowned Sikh scholar Bhai Mani Singh is quoted as saying that Sikhism has all the essence of Vedanta philosophy. 
Historically, the Sikh symbol of Ik Oankaar has had a monist meaning, and it has been reduced to simply meaning "There is but One God", which is incorrect. Older exegesis of Sikh scripture, such as the Faridkot Teeka and Garab Ganjani Teeka, has always described Sikh metaphysics as a non-dual, panentheistic universe. For this reason, Sikh metaphysics has often been compared to Non-Dual Vedanta metaphysics. The Sikh poet Bhai Nand Lal often used Sufi terms to describe Sikh philosophy, talking about wahdat ul-wujud in his Persian poetry. Wahdat ul-wujud (the Unity of All Things) is a concept sometimes described as pantheism or panentheism. It is primarily associated with the Asharite Sufi scholar Ibn Arabi. Some Sufi orders, notably the Bektashis and the Universal Sufi movement, adhere to similar panentheistic beliefs. The same is said about the Nizari Ismaili, who follow panentheism according to Ismaili doctrine. The Mesoamerican empires of the Mayas and Aztecs, as well as the South American Incas (Tawantinsuyu), have typically been characterized as polytheistic, with strong male and female deities. According to Charles C. Mann's history book 1491: New Revelations of the Americas Before Columbus, only the lower classes of Aztec society were polytheistic. Philosopher James Maffie has argued that Aztec metaphysics was panentheistic rather than pantheistic, since Teotl was considered by Aztec philosophers to be the ultimate all-encompassing yet all-transcending force defined by its inherent duality. Native American beliefs in North America have been characterized as panentheistic in that there is an emphasis on a single, unified divine spirit that is manifest in each individual entity. (North American Native writers have also translated the word for God as the Great Mystery or as the Sacred Other.) This concept is referred to by many as the Great Spirit. Philosopher J. 
Baird Callicott has described Lakota theology as panentheistic, in that the divine both transcends and is immanent in everything. One exception may be the modern Cherokee, who are predominantly monotheistic but apparently not panentheistic. Yet in older Cherokee traditions, many observe both pantheism and panentheism and are often not beholden to exclusivity, encompassing other spiritual traditions without contradiction, a common trait among some tribes in the Americas. In the stories of Keetoowah storytellers Sequoyah Guess and Dennis Sixkiller, God is known as ᎤᏁᎳᏅᎯ, commonly pronounced "unehlanv," and visited earth in prehistoric times, but then left earth and her people to rely on themselves. This shows a parallel to Vaishnava cosmology. Konkokyo is a form of sectarian Japanese Shinto and a faith within the Shinbutsu-shūgō tradition. Traditional Shintoism holds that an impersonal spirit manifests in and penetrates the material world, giving all objects consciousness and spontaneously creating a system of natural mechanisms, forces, and phenomena (Musubi). Konkokyo deviates from traditional Shintoism by holding that this spirit (comparable to Brahman) has a personal identity and mind. This personal form is non-different from the energy itself, not residing in any particular cosmological location. In Konkokyo, this god is named "Tenchi Kane no Kami-Sama," which can be translated directly as "Spirit of the gilded/golden heavens and earth." Though practitioners of Konkokyo are small in number (~300,000 globally), the sect has birthed or influenced a multiplicity of Japanese New Religions, such as Oomoto. Many of these faiths carry on the panentheistic views of Konkokyo. |
======================================== |
[SOURCE: https://www.bbc.com/afaanoromoo] | [TOKENS: 3186] |
BBC News, Afaan Oromoo - News you should not miss. Athlete Feyisa Lilesa said he was not injured when another car collided with his in a road accident last Saturday. Mesfin, who travelled to the US as a child and survived after receiving two heart operations, now treats other heart patients alongside the doctor who treated him. US President Donald Trump suggested that, because a "fair deal" could not be reached with Iran, they may move toward the military option. Prime Minister Abiy Ahmed, while inaugurating development projects in various zones of Wollega this week, renewed his call for reconciliation with armed groups. The OLA (WBO) responded that this call is "not sincere". The West Hararghe Zone prosecutor charged a former mayor, suspected of defrauding 30 people, under Article 32 of the Corruption Crimes Proclamation 881/2007. US President Donald Trump announced that he intends to order government agencies to release documents concerning UFOs and so-called aliens. The Wall Street Journal reported that President Trump is considering carrying out limited airstrikes to push Iran toward an agreement in the talks it is holding with America. In a letter to UN Secretary-General António Guterres and the UN Security Council, Iran stated that it would respond in a "decisive and proportionate" manner to any military aggression, exercising its right of self-defence under the UN Charter. Haile Fida Kuma! A name invoked, from yesterday to today, in connection with the Qubee alphabet of Afaan Oromoo and with modern Ethiopian politics, and one that will not be forgotten tomorrow either. He studied the country's history, learned world politics and philosophy, and made history himself. Ahead of the matchweek 27 fixtures, Mikel Arteta and Pep Guardiola gave their thoughts on winning the Premier League title. 
In week 27 of the English Premier League, Tottenham host league leaders Arsenal in the North London derby. Second-placed Manchester City host Newcastle. Turkish President Recep Tayyip Erdoğan recently traveled to Ethiopia and held talks with PM Abiy Ahmed. This week, a video of a sexual crime has become a leading topic in many countries. Veteran journalist Isayas Hordofa is one of the most senior Afaan Oromoo journalists. The food qocho, also called warqe (enset), is greatly loved in the southern and southwestern parts of Ethiopia. Eritrea responded to the Ethiopian government's request that it withdraw its soldiers from Ethiopian territory. In 2023, Eric was scrolling social media to watch porn (sexual videos) when, seconds later, he froze in shock at what he saw. Political analyst Jawar Mohammed spoke with the BBC about tensions in the Horn of Africa and about Ethiopian and Oromo politics. Their elders say the Konso and Borana peoples are one. America is unmatched in the world in military strength. The US and Iran are to sit for talks in Oman, with Turkey, Qatar, and Egypt mediating. PM Abiy responded to Trump for the first time. Ethiopia's leader, appearing before parliament, openly accused the Eritrean army. What is being talked about? In recent weeks, leaders from Europe, Asia, America, and the Arab countries have traveled to Ethiopia on visits, and continue to do so. 20:49 EAT - 21:04 EAT The Ethiopian government called on Eritrea to withdraw its soldiers who are inside Ethiopia's borders illegally. Former Ethiopian Foreign Minister Gedu Andargachew dismissed the claim that Prime Minister Abiy Ahmed had sent him to Eritrea to have the abuses by Eritrean soldiers stopped. 
PM Abiy told parliament that Eritrean soldiers, who fought the Tigrayan forces alongside the Ethiopian government in the two-year war in northern Ethiopia, looted vast amounts of property from Tigray, including factories. At a time of heightened tension in the Horn of Africa, Ethiopia held talks with Saudi Arabia to strengthen relations. For the first time, Ethiopian Prime Minister Abiy Ahmed openly accused the Eritrean army of committing various abuses during the war in northern Ethiopia. The Tigray Interim Administration said the recent clash between the Federal government and Tigrayan forces "should not have happened". The Chairperson of the African Union Commission, Mahmoud Ali Youssouf, asked the federal government and the TPLF to "firmly restrain themselves" from worsening the situation in the Tigray region. The opposition party Tigray Democratic Solidarity (Simret), led by the Prime Minister's Advisor on East African Affairs Getachew Reda, accused the faction it calls "backward-looking" of having "completed preparations and opened fire" to carry out "a broad attack reaching into the Afar region and neighboring districts". At the ceremony marking the 90th anniversary of the founding of the Ethiopian Air Force, a Sukhoi aircraft the country obtained from Russia appeared in the air among the unmanned warplanes on display. Sports: Arsenal were leading Wolves, the bottom club of the Premier League, 2-0, but the match ended 2-2. Such a performance genuinely raises the question of whether the club has the mentality of a title contender. Arsenal coach Mikel Arteta had recently spoken of the "strong player" Arsenal had just bought, meaning Martín Zubimendi, signed last summer. 
A year after a 33-year-old woman froze to death on Austria's highest mountain, her boyfriend has been accused of leaving her on the mountain and causing her death. Trent Alexander-Arnold said the alleged racist abuse directed at Vinícius Júnior "lowers the dignity of football", while Benfica coach José Mourinho was heavily criticized for the comments he made on the incident. Following the injuries that hit his midfield players, Mikel Arteta is considering moving Bukayo Saka, known for playing in the front line, into the role often called the number 10. The saying "character, like a mountain, does not move" seems to be coming true for Arsenal: leading the Premier League table, then slipping back by evening. A week earlier they were riding high with a nine-point lead. Pep Guardiola said he hopes that chasing Arsenal from behind will intensify the contest for the Premier League title. Manchester City are doing just that. Manchester United could not make Frank Ilett get his hair cut. The Liverpool-Manchester City match played at Anfield on Sunday evening was full of drama. One piece of drama concerned the final goal: Manchester City erupted in celebration thinking they had scored a third. Hugo Ekitiké, who joined Liverpool this season from Eintracht Frankfurt for £79 million, has become the club's hope. Standing on one leg greatly benefits our brain and body. Because it strengthens us considerably, it also helps keep us from falling unexpectedly. It likewise nourishes the brain and improves our memory. In week 25 of the Premier League, the big match anticipated between Liverpool and Manchester City on Sunday is under way. A win would narrow Manchester City's gap behind league leaders Arsenal to six points. 
© 2026 BBC. The BBC is not responsible for the content of external sources. Read about our approach to external linking. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-43] | [TOKENS: 5641] |
World The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. 
Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. 
There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. 
This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other. 
This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. 
This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. 
Against pantheism, it holds that there is no outright identity between the two. History of philosophy In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it carries a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is due to the fact that physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world". 
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things: it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just as the play is more than the imaginary realities appearing in it, so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. 
Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes. Religion Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a broad sense in this tradition, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools. 
Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. Liberation is possible, according to Advaita Vedanta, by overcoming this illusion through acquiring the knowledge of Brahman. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar). Related terms and problems A worldview is a comprehensive representation of the world and our place in it. 
As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of existence we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. 
This seems to contradict the very idea of a plurality of worlds since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time, and matter all have their origin in one initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to sufficiently cool down for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized concerning their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. 
In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. 
A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. 
Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberalists acknowledge the importance of states, but they also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, on this view, is more complex than a mere balance of power, since many different agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. If the international system is an anarchy of nation-states, as the realists hold, then, according to the constructivists, this is only because we have made it so, and it may change, since it is not prefigured by human nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_ref-18] | [TOKENS: 1793] |
Language model A language model is a computational model that predicts sequences in natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval. Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using texts scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model. History Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars. In the 1980s, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes a word's meaning such that words closer in vector space are similar in meaning, and common relationships between words, such as plurality or gender, are preserved. Pure statistical models In 1980, the first significant statistical language model was proposed, and during the decade IBM performed 'Shannon-style' experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
A word n-gram language model is a statistical model of language which calculates the probability of the next word in a sequence from a fixed-size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens ⟨s⟩ and ⟨/s⟩ are introduced to denote the start and end of a sentence. To prevent a zero probability being assigned to unseen words, the probability of each seen word is slightly lowered to make room for the unseen words in a given corpus. To achieve this, various smoothing methods are used, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques, such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network–based models, which in turn have been superseded by Transformer-based models often referred to as large language models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is

P(w_m | w_1, ..., w_{m-1}) = (1 / Z(w_1, ..., w_{m-1})) exp(a^T f(w_1, ..., w_m)),

where Z(w_1, ..., w_{m-1}) is the partition function, a is the parameter vector, and f(w_1, ..., w_m) is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a or some form of regularization. The log-bilinear model is another example of an exponential language model.
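The add-one smoothing described above can be sketched in a few lines of Python (a minimal illustration, not from the article; the function names, the toy corpus, and the choice of a bigram model are ours):

```python
from collections import Counter

def bigram_counts(corpus):
    """Count context words and bigrams over tokenized sentences,
    padding each sentence with the <s> and </s> markers described above."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        unigrams.update(tokens[:-1])  # every token except the last acts as a context
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    return unigrams, bigrams

def add_one_prob(word, prev, unigrams, bigrams, vocab_size):
    """P(word | prev) with add-one (Laplace) smoothing: every bigram,
    seen or unseen, receives one extra count, so no probability is zero."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
```

With the two-sentence toy corpus [["the", "cat"], ["the", "dog"]] and a vocabulary of five token types (the, cat, dog, ⟨s⟩, ⟨/s⟩), add_one_prob("cat", "the", ...) gives (1 + 1) / (2 + 5) = 2/7, while the unseen bigram ("the", "fish") still receives the non-zero probability 1/7.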
The skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. the word n-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence of the text whose components occur at distance at most k from each other. For a given input text, the set of 1-skip-2-grams therefore includes all the bigrams (2-grams) and, in addition, the word pairs in which one intervening word is skipped. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-dimensional vector representation, then

v(king) − v(male) + v(female) ≈ v(queen),

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. Neural models Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots.
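The k-skip-n-gram definition above can be made concrete with a short sketch (an illustration under our own naming; the helper function and the toy sentence are not from the article):

```python
from itertools import combinations

def skip_grams(tokens, n, k):
    """All k-skip-n-grams of a token list: length-n subsequences whose
    adjacent components are at most k tokens apart (up to k words skipped)."""
    grams = set()
    for idxs in combinations(range(len(tokens)), n):
        # combinations() yields index tuples in increasing order, so the
        # gap between adjacent chosen positions is idxs[i+1] - idxs[i].
        if all(idxs[i + 1] - idxs[i] <= k + 1 for i in range(n - 1)):
            grams.add(tuple(tokens[i] for i in idxs))
    return grams
```

For the tokens ['the', 'rain', 'in', 'Spain'], skip_grams(tokens, 2, 1) returns the three ordinary bigrams plus the one-word-skipping pairs ('the', 'in') and ('rain', 'Spain'), matching the description of 1-skip-2-grams above, while k = 0 recovers plain bigrams.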
LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. 
Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although such models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do. Evaluation and benchmarks Evaluation of the quality of language models is mostly done by comparison to human-created benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems.
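One widely used intrinsic quality measure of the kind mentioned above is perplexity: the exponentiated average negative log-probability a model assigns to held-out text, with lower values indicating a better fit. A minimal sketch (the function name and toy numbers are illustrative, not from the article):

```python
import math

def perplexity(token_probs):
    """Perplexity over a test sequence, given the probability the model
    assigned to each token: exp of the mean negative log-probability."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)
```

A model that assigns probability 0.25 to every token of a four-token sample has perplexity 4.0; in general, a uniform model over a vocabulary of size V has perplexity V, so perplexity can be read as an effective branching factor.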
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Video_game_skin] | [TOKENS: 586] |
Theme (computing) In computing, a theme is a preset package containing graphical appearance and functionality details. A theme usually comprises a set of shapes and colors for the graphical control elements, the window decoration, and the window. Themes are used to customize the look and feel of a piece of computer software or of an operating system. A theme is also known as a skin (or visual style in Windows XP): a custom graphical appearance preset package, achieved by the use of a graphical user interface (GUI), that can be applied to specific computer software, operating systems, and websites to suit the purpose, topic, or tastes of different users. As such, a skin can completely change the look and feel and navigation interface of a piece of application software or operating system. Software that is capable of having a skin applied is referred to as being skinnable, and the process of writing or applying such a skin is known as skinning. Applying a skin changes a piece of software's look and feel—some skins merely make the program more aesthetically pleasing, but others can rearrange elements of the interface, potentially making the program easier to use. Use Themes are often used to change the look and feel of a wide range of things at once, which makes them much less granular than allowing the user to set each option individually. For example, users might want the window borders from a particular theme, but installing it would also alter the desktop background. One method for dealing with this is to allow the user to select which parts of the theme they want to load; for example, in Windows 98, users could load the background and screensaver from a theme but leave the icons and sounds untouched. Themed systems Firefox and Google Chrome either support or have supported a form of theme. Firefox (and its sibling Thunderbird) supports themes through lightweight themes (formerly Personas).
Google Chrome version 3.0 and later allows themes to alter the appearance of the browser. Internet Explorer 5 and its immediate successor allowed the background picture of their toolbars to be customized. The most popular skins are for instant messaging clients, media center software, and media players, such as Trillian and Winamp, due to the association with fun that such programs try to encourage. Standard interface Some platforms support changing the standard interface, including most of those using the X Window System. For those that do not, programs can add the functionality, like WindowBlinds for Microsoft Windows and ShapeShifter for macOS. Websites Many websites are skinnable, particularly those that provide social capabilities. Some sites provide skins that make primarily cosmetic changes, while some—such as H2G2—offer skins that make major changes to page layout. As with standalone software interfaces, this is facilitated by the underlying technology of the website—XML and XSLT, for instance, facilitate major changes of layout, while CSS can easily produce different visual styles.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Scholarpedia] | [TOKENS: 700] |
Scholarpedia Scholarpedia is an English-language wiki-based online encyclopedia with features commonly associated with open-access online academic journals, which aims to have quality content in science and medicine. Scholarpedia articles are written by invited or approved expert authors and are subject to peer review. Scholarpedia lists the real names and affiliations of all authors, curators, and editors involved in an article; however, the peer review process (which can suggest changes or additions, and has to be satisfied before an article can appear) is anonymous. Scholarpedia articles are stored in an online repository, and can be cited as conventional journal articles (Scholarpedia has the ISSN number ISSN 1941-6016). Scholarpedia's citation system includes support for revision numbers. The project was created in February 2006 by Eugene M. Izhikevich, while he was a researcher at the Neurosciences Institute, San Diego, California. Izhikevich also serves as the encyclopedia's editor-in-chief. Scope Scholarpedia content is grouped into separate "encyclopedias". As of August 2023, seven of these are described as "focal areas". Other encyclopedias include diverse areas such as play science and models of brain disorders. As of November 2018, Scholarpedia had 1,804 content pages and 18,149 registered users, while as of November 2021, it had 1,812 peer-reviewed articles. Only seven articles were created in 2024. Funding Scholarpedia's maintenance and server costs are currently funded by Brain Corporation, a robotics company of which Izhikevich is the co-founder and CEO. As stated on Scholarpedia's Frequently Asked Questions page, the company is also able to "benefit from Scholarpedia's extensive coverage of topics in computational neuroscience".
Authorship To ensure that the articles are written by experts, authors of the various articles in Scholarpedia are either invited by the editor-in-chief or other curators, or selected by a public election. For example, Jimmy Wales and Larry Sanger were nominated for the article on Wikipedia. As of May 2009, the list of authors included four Fields Medalists and sixteen Nobel Prize winners. Registered users must provide their full real name and a recognized affiliation with an academic institution. Only registered users can edit an article, and those edits are subject to approval by the curator of the article, who is typically the author. Curatorship is transferable. Users have a curator index attribute which is incremented or decremented by various activities and which affects the user's capabilities on the website. Since October 20, 2011, anyone can propose an article for Scholarpedia, but articles must be sponsored by editors or curators before they can be published. Copyright Articles are available online without charge for non-commercial use, but may not be copied in bulk. Authors are credited on the article page and in the suggested citation formats. In January 2008 Scholarpedia changed its licensing policy and started accepting articles under the GNU Free Documentation License and the Creative Commons Attribution-Noncommercial-ShareAlike 3.0 license, in addition to the earlier system in which the author gives a non-exclusive license directly to Scholarpedia. Software Scholarpedia uses the same wiki engine as Wikipedia, MediaWiki, with modifications to support voting on revisions. The software's development is done privately.
======================================== |
[SOURCE: https://www.bbc.com/amharic] | [TOKENS: 24756] |
BBC News Amharic - News: The Tigray Region Interim Administration said that information circulating claiming the federal government has ordered the Ethiopian Defense Forces to enter the region is "completely false." The interim administration issued the statement after reports circulated that an understanding had been reached with the federal government for the defense forces to be stationed in the Northern Command camps in the region. The rival Middle Eastern powers Saudi Arabia and the United Arab Emirates are driving political, diplomatic, and military shifts in the Horn of Africa. Ethiopia, Eritrea, Somalia, and Sudan in particular have become their main arenas of competition, and Turkey's role in the region is also growing. What does this rivalry bring for the Horn of Africa? Experts say that beyond the harm conflicts inflict on human life and limb, property, and infrastructure, they take a heavy toll on people's physical and psychological health and wellbeing. What psychological changes occur when we find ourselves in such circumstances, and what can we do to cope? A hundred years have passed since Turkey was founded as a nation-state after the fall of the Ottoman Empire. Atatürk, known as the father of Turkey, laid the foundations for a secular and modern country. President Recep Tayyip Erdoğan, a notable influence on the modern world stage, is striving to make Turkey a world power as in the Ottoman era. Will he succeed? In a speech to the House of Peoples' Representatives, Prime Minister Abiy, citing witnesses, spoke for the first time about how his government's dispute with Eritrea began. He also said with certainty that Ethiopia's wish to gain access to the sea, another cause of friction between the two countries, will be achieved "one way or another." US President Donald Trump, who days earlier offered to mediate between Ethiopia and Egypt over the Nile, said Ethiopia has built a dam that "stops the flow" of the Nile river. Because of this, the president said, Egypt "will not get enough water" from the Nile, adding that he "will have to solve" the issue. What has the president said about the matter on various occasions? On Saturday, Tir 2, 2018 in the Ethiopian calendar, the foundation stone was laid for the new international airport Ethiopian Airlines is building at a site called 'Abusera' near Bishoftu.
The first phase of the airport, which will cost 12.5 billion dollars, is expected to handle 60 million passengers a year once completed. Covering 3,500 hectares, the huge airport is said to be capable of serving 110 million passengers a year when it enters operation at full capacity. Although the United States holds large oil reserves of its own, it routinely turns its attention to oil reserves in other countries, and it has long been involved, directly and indirectly, in wars in many of them. Recently it took direct military action in Venezuela and seized the country's leader; since then, President Trump has repeatedly and openly spoken of America's interest in Venezuela's oil wealth. Eritrean President Isaias Afwerki, in an interview with his country's television, said there is a party pushing from behind the tension and the problems that have arisen in the Horn of Africa. Noting that there is a chance war could break out, the president said Eritrea is prepared for it: "We know war in practice." Although the United States has charged Maduro with drug and arms trafficking, its main focus is said to be Venezuela's abundant oil wealth. Trump has made no secret of this, saying plainly that until a proper transition takes place Venezuela will be run by the United States and that American companies will take over oil production. Following Israel's formal recognition of Somaliland's statehood, questions are being raised about what may become of Somaliland, Somalia, and the wider region. Mulatu's musical journey spans decades. He was born in Jimma, a city in southwestern Ethiopia, but the United Kingdom holds a special place for him: he was educated in Wales. How did Mulatu enter the world of music, and will we see him on stage again?
Talking point: Ato Nebiyu has had a bitter experience of social media's impact on young adolescents. Without seeing any warning signs, he found that his 13-year-old daughter Fiorella, under pressure from her peers and from social media, had taken her own life when he least expected it. In Debre Tabor, the seat of the South Gondar Zone in the Amhara region, residents say that fighting between Fano fighters and government forces caused loss of life and damage to property. A statement issued by the city administration said the fighters had destroyed and looted government and private property. The National Bank of Ethiopia has allowed customers holding foreign-currency accounts to obtain foreign-currency payment cards for international online transactions without having to present a visa and a flight ticket. Under the new directive, Ethiopians may also send up to 3,000 dollars abroad "to support relatives living outside the country." There are signs that the 'war of words' Ethiopia and Eritrea have been exchanging is now sharper than ever. Eritrea has dismissed as "false and fabricated" Ethiopia's accusation that it "has entered Ethiopian territory and taken control of certain areas" and that it "supports armed groups." Why has this war of words escalated now, and where might it lead? Ethiopia's Ministry of Foreign Affairs demanded that the Eritrean army withdraw from what it says are sovereign Ethiopian territories under its control, and that Eritrea be ready for negotiations. In a letter written to the Eritrean foreign ministry on Saturday, Tir 30, 2018 in the Ethiopian calendar, the ministry said Eritrea had violated Ethiopia's sovereign border and was occupying areas in the northeast. Eritrea, in a short statement issued through its Ministry of Information, called Ethiopia's accusation "meaningless" and said it has no wish to be drawn into the dispute. In its Saturday letter, Ethiopia's foreign ministry had accused Eritrea of entering Ethiopian territory, controlling certain areas, and supporting armed groups, and had called for negotiations. Former Foreign Minister Gedu Andargachew denied the account Prime Minister Abiy Ahmed gave in parliament, in which the prime minister said a message had been sent to Asmara during the Tigray war demanding that the Eritrean army stop the atrocities it was committing. Gedu confirmed to the BBC that the three-page letter circulating on social media is his. Officials and fuel stations say it has been months since a severe fuel shortage hit various parts of the Tigray region, including the regional capital Mekelle. Yet fuel is being sold on the black market at more than double the price, and where it comes from is a question many are asking.
Following the release of the various Jeffrey Epstein files, prominent individuals and institutions around the world are being named in a negative light. Politicians' and celebrities' reputations have been thrown into question, and some are stepping down from power and from their posts. Who is Jeffrey Epstein, the man behind this whole crisis? Eleven years have passed since more than 30 Ethiopians and Eritreans were killed by ISIS in Libya. Daniel Abraha, who survived that horrific massacre in what seems like a miracle, spoke with the BBC about his book recounting his journey from Addis Ababa to Libya and then to the United Kingdom. What is the mental illness called schizophrenia like? Professor Solomon Teferra wrote his PhD on it and has researched it for 20 years; he has had, and still has, many patients suffering from schizophrenia. The professor has a gift for explaining things briefly, and he sums up the illness with the Amharic idiom "it is like spending the whole day grinding Mary's malt." What does he mean? The Israel–Iran conflict: as IS has been losing its support in the Middle East, it has been gaining a foothold in various African countries. With around 12,000 fighters in the countries of West Africa, IS also operates in neighboring Somalia, and the group is active in the Democratic Republic of Congo, Mozambique, and Nigeria as well. Russia, which has lagged behind its competitors in stationing military forces on the Red Sea, is about to become a new power on the vital sea lane. Having long sought to establish its first naval base in Africa, Russia is now close to realizing that ambition. What will the arrival of this Russian force mean for the region? When Syria's new president Ahmed al-Sharaa steps onto the international stage in a well-cut suit with his hair combed back, it signals that he is at a decisive juncture. Having set aside his combat fatigues and taken a new name, he now shakes hands with some of the world's most powerful leaders. Israel, a Middle Eastern power and the only Jewish state, is the youngest of all the countries around it. Its founding as a state in 1948, after the end of the Second World War, set off changes in the region that continue to this day. How was this country, whose name the world's media rarely go a day without mentioning, founded?
The Israeli attack in Qatar's capital Doha struck a residential area where many officials of Hamas's political bureau live. Eyewitnesses said they saw up to eight explosions and heavy smoke over the northern part of Doha, and minutes after the attack Israel publicly claimed responsibility. Questions answered: after Prime Minister David Ben-Gurion's government put forward the world-famous scientist Albert Einstein for the presidency of Israel, Israel's ambassador to the United States, Abba Eban, was tasked with approaching Einstein. Ethiopia has begun work to build what is described as Africa's largest airport at a site about 40 kilometers from Addis Ababa. The airport is expected to handle many times more passengers than the existing one in central Addis Ababa. Why build this international airport now, and why was this site chosen? From our archive: it is only a few days short of a month since the Marburg virus, which originates in bats, was detected in Ethiopia for the first time. The Ministry of Health has not yet explained how the disease emerged in the town of Jinka. Who was the first patient in Jinka, and what connected that first patient with the bats from which the disease originates? Through numerous interviews, the BBC has gathered information that answers these questions. In the conflict between government forces and Fano fighters in the Amhara region, now running for more than two years, civilians are paying a heavy price. Beyond the deaths and the destruction of property, the violence and suffering inflicted on women has received little attention. In months of monitoring, the BBC has obtained information showing that large numbers of women have been victims of rape and sexual violence. Recently areke, the local spirit, became a national talking point and topped social media. Tej, the honey wine, would never top the feed; it goes down to the knees instead. The last time tej was a matter of state, a 'national agenda', was in the era of Emperor Yohannes IV. Tej used to be served in a birille flask, and then came to be drunk from it. But unlike areke, no 'ordinary citizen' simply grabs tej and knocks it back: inside the birille there is a history of class, a history of eras, a history of tenant farmers and nobility, one that is not muddled. YouTube, the video-sharing social media platform that turned 20 this month, has created opportunities of many kinds. Abel Birhanu is one example: through YouTube he has traveled to 62 countries, Antarctica being the only continent he has not set foot on, and he has covered all the states of America. After more than nine years on YouTube, how much does Abel earn from it?
Weizero Etetu Mamo, a 65-year-old mother, is a die-hard supporter of Arsenal, one of England's great clubs. She has backed Arsenal without interruption for the past 20 years. Like many of the club's supporters she was upset that Arsenal lifted no trophy this season, but she says her support will continue. Above all the players, she has a special love for Bukayo Saka and for Gabriel Jesus, whom she calls "Gebre-Eyesus." The history of Teferi Mekonnen School is something like a blueprint of modern Ethiopia: over the past 100 years it has produced great figures. Who, then, did this school turn out? This feature recalls individuals who, in one way or another, left their mark in its 100-year history, from Canadian Jesuits to Mayor Adanech Abebe, from the magnate Ras Hailu Teklehaymanot to the billionaire Sheikh Al Amoudi, and from Hakim Workneh Eshete to Professor Asrat Woldeyes. In recent years, attacks and abductions targeting passenger and freight vehicles have multiplied in Ethiopia, and they have occurred repeatedly in the North Shewa area, a short distance from Addis Ababa. A place called Ali Doro, in particular, is said to have become a center of attacks and abductions. The BBC conducted a months-long investigation into why this spot became a 'death zone' for travelers and drivers. Over the past years, buildings registered as heritage in Addis Ababa, along with other historic buildings, houses, and neighborhoods, have been casually reduced to rubble. This report examines what the criteria are for calling a building or a house a heritage site or a historic place; what effect the demolition of historic buildings and houses has on history, on residents' state of mind, on architecture, and on social life; and how heritage and historic sites can be protected, restored, modernized, brought back into use, and made to stand side by side with new construction.
The abduction of Tsega Belachew, carried out in Ginbot 2015 in the Ethiopian calendar, was a crime that angered many and became a talking point. Tsega has now spoken to the BBC in detail, for the first time, about the days she spent in captivity a year and a half ago. In a long conversation with the BBC she recounted in detail how she was abducted and the nine days she endured. She also said, for the first time, that the way she was freed differs from what the regional government announced at the time, and she shared why she fled Ethiopia and her plans for the future. In recent years, the conflicts under way in different parts of Ethiopia and the abductions carried out by armed groups have left citizens deeply afraid to move around the country when and how they wish. The relentless abductions carried out against passenger transport vehicles in particular have made travel difficult in the country's main regions, and as a result people who can afford it, or who have no choice, have turned to air transport. Dr. Ewa from Poland learned Amharic more than 40 years ago, graduated with a degree in it, and over more than twenty years of teaching has helped many others earn degrees in the language. The university where she teaches has been teaching Amharic for close to 80 years. Dr. Ewa, who calls Amharic "a waterfall," works as an Amharic interpreter in her country. How did she come to know Amharic? Jawar Mohammed is one of the figures holding a prominent place in Ethiopian politics. For years he was at the forefront of coordinating and leading the struggles and protests in Oromia, both behind the scenes and in front of them. The Oromo Federalist Congress party leader will launch 'I Do Not Regret', a book centered on his life from birth through his political career, on Tahsas 10, 2017 in the Ethiopian calendar in the Kenyan capital Nairobi. Jawar spoke with BBC Amharic about national issues, the role he expects to play in Ethiopia's politics going forward, and the situation the country is in. © 2026 BBC. The BBC is not responsible for the content of external sites. Read about our approach to external linking.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Deism] | [TOKENS: 6173] |
Deism Deism (/ˈdiːɪzəm/ DEE-iz-əm or /ˈdeɪ.ɪzəm/ DAY-iz-əm; derived from the Latin term deus, meaning "god") is the philosophical position and rationalistic theology that rejects prophecies, revelations, and religious texts as legitimate or reliable sources of divine knowledge, and instead asserts that empirical reason and observation of the natural world are exclusively logical, reliable, and sufficient to determine the existence of a Supreme Being as the creator of the universe. Unlike classical theism, Deism is the belief, based solely on rational thought and without any reliance on revealed religions or religious authorities, in the existence of a creator God who no longer intervenes after creating the universe. Therefore, Deism emphasizes the concept of natural theology—that is, God's existence is revealed through nature itself. Since the 17th century and during the Age of Enlightenment, especially in 18th-century England, France, and North America, various Western philosophers and theologians formulated a critical rejection of the several religious texts belonging to the many organized religions, and began to appeal only to truths that they felt could be established by reason as the exclusive source of divine knowledge. Such philosophers and theologians were called "Deists", and the philosophical/theological position they advocated is called "Deism". Deism as a distinct philosophical and intellectual movement declined toward the end of the 18th century but had a revival in the early 19th century. Some of its tenets continued as part of other intellectual and spiritual movements, like Unitarianism, and Deism continues to have advocates today, including with modern variants such as Christian deism and pandeism. Early developments Deistical thinking has existed since ancient times; the roots of Deism can be traced back to the philosophical tradition of Ancient Greece.
The 3rd-century Christian theologian and philosopher Clement of Alexandria explicitly mentioned persons who believed that God was not involved in human affairs, and therefore led what he considered a licentious life. However, Deism did not develop as a religio-philosophical movement until after the Scientific Revolution, which began in the mid-16th century in early modern Europe. In the history of Islam, one of the earliest systematic schools of Islamic theology to develop was the Muʿtazila in the mid-8th century CE. Muʿtazilite theologians emphasized the use of reason and rational thought, positing that the injunctions of God are accessible through rational thought and inquiry, and affirmed that the Quran was created (makhlūq) rather than co-eternal with God, an affirmation that would develop into one of the most contentious questions in the history of Islamic theology. In the 9th–10th century CE, the Ashʿarī school developed as a response to the Muʿtazila, founded by the 10th-century Muslim scholar and theologian Abū al-Ḥasan al-Ashʿarī. Ashʿarītes still taught the use of reason in understanding the Quran, but denied the possibility of deducing moral truths by reasoning. This position was opposed by the Māturīdī school; according to its founder, the 10th-century Muslim scholar and theologian Abū Manṣūr al-Māturīdī, human reason is supposed to acknowledge the existence of a creator deity (bāriʾ) solely based on rational thought and independently from divine revelation. He shared this conviction with his teacher and predecessor Abū Ḥanīfa al-Nuʿmān (8th century CE), whereas al-Ashʿarī never held such a view. According to the Afghan-American philosopher Sayed Hassan Hussaini, the early schools of Islamic theology and theological beliefs among classical Muslim philosophers are characterized by "a rich color of Deism with a slight disposition toward theism".
The terms deism and theism are both derived from words meaning "god": the Latin term deus and the Ancient Greek term theós (θεός), respectively. The word déiste first appeared in French in 1563 in a theological treatise written by the Swiss Calvinist theologian Pierre Viret, but Deism was generally unknown in the Kingdom of France until the 1690s, when Pierre Bayle published his famous Dictionnaire Historique et Critique, which contained an article on Viret. In English, the words deist and theist were originally synonymous, but by the 17th century the terms started to diverge in meaning. The term deist with its current meaning first appears in English in Robert Burton's The Anatomy of Melancholy (1621). The first major statement of Deism in English literature is Lord Herbert of Cherbury's book De Veritate (1624). Lord Herbert, like his contemporary Descartes, searched for the foundations of knowledge. The first two-thirds of his book De Veritate (On Truth, as It Is Distinguished from Revelation, the Probable, the Possible, and the False) are devoted to an exposition of Herbert's theory of knowledge. Herbert distinguished truths obtained through experience, and through reasoning about experience, from innate truths and from revealed truths. Innate truths are imprinted on our minds, as evidenced by their universal acceptance. Herbert referred to universally accepted truths as notitiae communes—Common Notions. Herbert believed there were five Common Notions that unify all religious beliefs. Herbert himself had relatively few followers, and it was not until the 1680s that Herbert found a true successor in Charles Blount (1654–1693). The appearance of John Locke's Essay Concerning Human Understanding (1690) marks an important turning-point and new phase in the history of English Deism. Lord Herbert's epistemology was based on the idea of "common notions" (or innate ideas). Locke's Essay was an attack on the foundation of innate ideas.
After Locke, deists could no longer appeal to innate ideas as Herbert had done. Instead, deists were forced to turn to arguments based on experience and nature. Under the influence of Newton, they turned to the argument from design as the principal argument for the existence of God. Peter Gay identifies John Toland's Christianity Not Mysterious (1696), and the "vehement response" it provoked, as the beginning of post-Lockian Deism. Among the notable figures, Gay describes Toland and Matthew Tindal as the best known; however, Gay considered them to be talented publicists rather than philosophers or scholars. He regards Conyers Middleton and Anthony Collins as contributing more to the substance of debate, in contrast with fringe writers such as Thomas Chubb and Thomas Woolston. Other English Deists prominent during the period include William Wollaston, Charles Blount, Henry St John, 1st Viscount Bolingbroke, and, in the latter part, Peter Annet, Thomas Chubb, and Thomas Morgan. Anthony Ashley-Cooper, 3rd Earl of Shaftesbury was also influential; though not presenting himself as a Deist, he shared many of the deists' key attitudes and is now usually regarded as a Deist. Especially noteworthy is Matthew Tindal's Christianity as Old as the Creation (1730), which became, very soon after its publication, the focal center of the Deist controversy. Because almost every argument, quotation, and issue raised for decades can be found here, the work is often termed "the Deist's Bible". Following Locke's successful attack on innate ideas, Tindal's "Bible" redefined the foundation of Deist epistemology as knowledge based on experience or human reason. This effectively widened the gap between traditional Christians and what he called "Christian Deists", since this new foundation required that "revealed" truth be validated through human reason. 
Enlightenment Deism Enlightenment Deism consisted of two philosophical assertions: (1) reason, along with features of the natural world, is a valid source of religious knowledge, and (2) revelation is not a valid source of religious knowledge. Different Deist philosophers expanded on these two assertions to create what Leslie Stephen later termed the "constructive" and "critical" aspects of Deism. "Constructive" assertions were those that deist writers felt were justified by appeals to reason and features of the natural world (or perhaps were intuitively obvious or common notions); "critical" assertions, which followed from the denial of revelation as a valid source of religious knowledge, were much more numerous. A central premise of Deism was that the organized religions of their day were corruptions of an original religion that was pure, natural, simple, and rational. Humanity lost this original religion when it was subsequently corrupted by priests who manipulated it for personal gain and for the class interests of the priesthood, and encrusted it with superstitions and "mysteries"—irrational theological doctrines. Deists referred to this manipulation of religious doctrine as "priestcraft", a derogatory term. For Deists, this corruption of natural religion was designed to keep laypeople baffled by "mysteries" and dependent on the priesthood for information about the requirements for salvation. This gave the priesthood a great deal of power, which the Deists believed the priesthood worked to maintain and increase. Deists saw it as their mission to strip away "priestcraft" and "mysteries"; Matthew Tindal, perhaps the most prominent Deist writer in early modern Europe, claimed that this was the proper, original role of the Christian Church. 
One implication of this premise was that current-day primitive societies, or societies that existed in the distant past, should have religious beliefs less infused with superstitions and closer to those of natural theology. This position became less and less plausible as Enlightenment philosophers such as David Hume began studying the natural history of religion and suggested that the origin of religion was not in reason but in emotions, such as the fear of the unknown. Different Deists had different beliefs about the immortality of the soul, about the existence of Hell and damnation to punish the wicked, and the existence of Heaven to reward the virtuous. Anthony Collins, Bolingbroke, Thomas Chubb, and Peter Annet were materialists and either denied or doubted the immortality of the soul. Benjamin Franklin believed in reincarnation or resurrection. Lord Herbert of Cherbury and William Wollaston held that souls exist, survive death, and in the afterlife are rewarded or punished by God for their behavior in life. Thomas Paine believed in the "probability" of the immortality of the soul. The most natural position for Deists was to reject all forms of supernaturalism, including the miracle stories in the Bible. The problem was that the rejection of miracles also seemed to entail the rejection of divine providence (that is, God taking a hand in human affairs), something that many Deists were inclined to accept. Those who believed in a watch-maker God rejected the possibility of miracles and divine providence. They believed that God, after establishing natural laws and setting the cosmos in motion, stepped away. He did not need to keep tinkering with his creation, and the suggestion that he did was insulting. Others, however, firmly believed in divine providence, and so, were reluctantly forced to accept at least the possibility of miracles. God was, after all, all-powerful and could do whatever he wanted including temporarily suspending his own natural laws. 
Enlightenment philosophers under the influence of Newtonian science tended to view the universe as a vast machine, created and set in motion by a creator being, that continues to operate according to natural law without any divine intervention. This view naturally led to what was then called "necessitarianism" (the modern term is "determinism"): the view that everything in the universe—including human behavior—is completely, causally determined by antecedent circumstances and natural law. (See, for example, La Mettrie's L'Homme machine.) As a consequence, debates about freedom versus "necessity" were a regular feature of Enlightenment religious and philosophical discussions. Reflecting the intellectual climate of the time, there were differences among Deists about freedom and determinism. Some, such as Anthony Collins, were actually necessitarians. Views differ on whether David Hume was a Deist, an atheist, or something else. Like the Deists, Hume rejected revelation, and his famous essay "Of Miracles" provided a powerful argument against belief in miracles. On the other hand, he did not believe that an appeal to Reason could provide any justification for religion. In The Natural History of Religion (1757), he contended that polytheism, not monotheism, was "the first and most ancient religion of mankind" and that the psychological basis of religion is not reason, but fear of the unknown. In Waring's words: The clear reasonableness of natural religion disappeared before a semi-historical look at what can be known about uncivilized man—"a barbarous, necessitous animal," as Hume termed him. Natural religion, if by that term one means the actual religious beliefs and practices of uncivilized peoples, was seen to be a fabric of superstitions. Primitive man was no unspoiled philosopher, clearly seeing the truth of one God. 
And the history of religion was not, as the deists had implied, retrograde; the widespread phenomenon of superstition was caused less by priestly malice than by man's unreason as he confronted his experience. The Thirteen Colonies of North America – which became the United States of America after the American Revolution in 1776 – were part of the British Empire, and Americans, as British subjects, were influenced by and participated in the intellectual life of the Kingdom of Great Britain. English Deism was an important influence on the thinking of Thomas Jefferson and the principles of religious freedom asserted in the First Amendment to the United States Constitution. Other Founding Fathers who were influenced to various degrees by Deism were Ethan Allen, Benjamin Franklin, Cornelius Harnett, Gouverneur Morris, Hugh Williamson, James Madison, John Adams, and possibly Alexander Hamilton. In the United States, there is a great deal of controversy over whether the Founding Fathers were Christians, Deists, or something in between. Particularly heated is the debate over the beliefs of Benjamin Franklin, Thomas Jefferson, John Adams, and George Washington. In his Autobiography, Franklin wrote that as a young man "Some books against Deism fell into my hands; they were said to be the substance of sermons preached at Boyle's lectures. It happened that they wrought an effect on me quite contrary to what was intended by them; for the arguments of the Deists, which were quoted to be refuted, appeared to me much stronger than the refutations; in short, I soon became a thorough Deist." Like some other Deists, Franklin believed that, "The Deity sometimes interferes by his particular Providence, and sets aside the Events which would otherwise have been produc'd in the Course of Nature, or by the Free Agency of Man," and at the Constitutional Convention stated that "the longer I live, the more convincing proofs I see of this truth—that God governs in the affairs of men." 
Thomas Jefferson is perhaps the Founding Father who most clearly exhibits Deistic tendencies, although he generally referred to himself as a Unitarian rather than a Deist. His excerpts of the canonical gospels (now commonly known as the Jefferson Bible) strip all supernatural and dogmatic references from the narrative on Jesus' life. Like Franklin, Jefferson believed in God's continuing activity in human affairs. Thomas Paine is especially noteworthy both for his contributions to the cause of the American Revolution and for his writings in defense of Deism, alongside the criticism of Abrahamic religions. In The Age of Reason (1793–1794) and other writings, he advocated Deism, promoted reason and freethought, and argued against institutionalized religions in general and the Christian doctrine in particular. The Age of Reason was short, readable, and probably the only Deistic treatise that continues to be read and influential today. Historian Mitch Horowitz noted that, "Colonials, at least those of means, had the capacity to participate in a fraternal order that enshrined and protected the individual spiritual search—and believed that the search belonged to no single congregation, doctrine, or dogma." Another important contributor to American Deism was Elihu Palmer (1764–1806), who wrote the "Bible of American Deism", Principles of Nature, in 1801. Palmer is noteworthy for attempting to bring some organization to Deism by founding the "Deistical Society of New York" and other Deistic societies from Maine to Georgia. John Adams held theologically complex views and seemed to take a middle course between Deism and Calvinism, which led him to Unitarianism. In a letter dated December 25, 1813, Adams suggested that the Christian Trinity was a "fabrication" derived from Pythagorean and Platonic philosophies rather than divine revelation, and expressed surprise that theologian Joseph Priestley had overlooked these connections to pre-Christian thought. 
Adams's faith is often described as Christian Deism, since Unitarianism in his time had expanded to include non-theistic schools of thought. He argued that one's salvation depended on behavior rather than belief. France had its own tradition of religious skepticism and natural theology in the works of Montaigne, Pierre Bayle, and Montesquieu. The most famous of the French Deists was Voltaire, who was exposed to Newtonian science and English Deism during his two-year period of exile in England (1726–1728). When he returned to France, he brought both back with him, and exposed the French reading public (i.e., the aristocracy) to them, in a number of books. French Deists also included Maximilien Robespierre and Jean-Jacques Rousseau. During the French Revolution (1789–1799), the Deistic Cult of the Supreme Being—a direct expression of Robespierre's theological views—was established briefly (just under three months) as the new state religion of France, replacing the deposed Catholic Church and the rival atheistic Cult of Reason. There were over five hundred French Revolutionaries who were deists. These deists do not fit the stereotype of deists because they believed in miracles and often prayed to God. In fact, over seventy of them thought that God miraculously helped the French Revolution win victories over their enemies. Furthermore, over a hundred French Revolutionary deists also wrote prayers and hymns to God. Citizen Devillere was one of the many French Revolutionary deists who believed God did miracles. Devillere said, "God, who conducts our destiny, deigned to concern himself with our dangers. He commanded the spirit of victory to direct the hand of the faithful French, and in a few hours the aristocrats received the attack which we prepared, the wicked ones were destroyed and liberty was avenged." Deism in Germany is not well documented. We know from correspondence with Voltaire that Frederick the Great was a Deist. 
Immanuel Kant's identification with Deism is controversial. Peter Gay describes Enlightenment Deism as entering slow decline as a recognizable movement in the 1730s, and a number of reasons have been suggested for this decline. Although Deism has declined in popularity over time, scholars believe that these ideas still have a lingering influence on modern society. One of the major activities of the Deists, biblical criticism, evolved into its own highly technical discipline. Deist rejection of revealed religion evolved into, and contributed to, 19th-century liberal British theology and the rise of Unitarianism. Contemporary Deism Contemporary Deism attempts to integrate classical Deism with modern philosophy and the current state of scientific knowledge. This attempt has produced a wide variety of personal beliefs under the broad classification of "deism". There are a number of subcategories of modern Deism, including monodeism (the default, standard concept of deism), pandeism, panendeism, spiritual deism, process deism, Christian deism, polydeism, scientific deism, and humanistic deism. Some deists see design in nature and purpose in the universe and in their lives. Others see God and the universe in a co-creative process. Some deists view God in classical terms as observing humanity but not directly intervening in our lives, while others see God as a subtle and persuasive spirit who created the world and then stepped back to observe. In the 1960s, theologian Charles Hartshorne scrupulously examined and rejected both deism and pandeism (as well as pantheism) in favor of a conception of God whose characteristics included "absolute perfection in some respects, relative perfection in all others", or "AR", writing that this theory "is able consistently to embrace all that is positive in either deism or pandeism," concluding that "panentheistic doctrine contains all of deism and pandeism except their arbitrary negations." 
Charles Taylor, in his 2007 book A Secular Age, showed the historical role of Deism, leading to what he calls an "exclusive humanism". This humanism invokes a moral order whose ontic commitment is wholly intra-human with no reference to transcendence. One of the special achievements of such deism-based humanism is that it discloses new, anthropocentric moral sources by which human beings are motivated and empowered to accomplish acts of mutual benefit. This is the province of a buffered, disengaged self, which is the locus of dignity, freedom, and discipline, and is endowed with a sense of human capability. According to Taylor, by the early 19th century this Deism-mediated exclusive humanism developed as an alternative to Christian faith in a personal God and an order of miracles and mystery. Some critics of Deism have accused adherents of facilitating the rise of nihilism. In Nazi Germany, Gottgläubig (literally: "believing in God") was a Nazi religious term for a form of non-denominationalism practised by those German citizens who had officially left Christian churches but professed faith in some higher power or divine creator. Such people were called Gottgläubige ("believers in God"), and the term for the overall movement was Gottgläubigkeit ("belief in God"); the term denotes someone who still believes in a God, although without having any institutional religious affiliation. These National Socialists were not favourable towards religious institutions of their time, nor did they tolerate atheism of any type within their ranks. The 1943 Philosophical Dictionary defined Gottgläubig as: "official designation for those who profess a specific kind of piety and morality, without being bound to a church denomination, whilst however also rejecting irreligion and godlessness." The Gottgläubigkeit is considered a form of deism, and was "predominantly based on creationist and deistic views". 
In the 1920 National Socialist Programme of the National Socialist German Workers' Party (NSDAP), Adolf Hitler first mentioned the phrase "Positive Christianity". The Nazi Party did not wish to tie itself to any particular Christian denomination, instead identifying with Christianity in general, and sought freedom of religion for all denominations "so long as they do not endanger its existence or oppose the moral senses of the Germanic race" (point 24). When Hitler and the NSDAP came to power in 1933, they sought to assert state control over the churches, on the one hand through the Reichskonkordat with the Roman Catholic Church, and on the other through the forced merger of the German Evangelical Church Confederation into the Protestant Reich Church. This policy seems to have gone relatively well until late 1936, when a "gradual worsening of relations" between the Nazi Party and the churches saw the rise of Kirchenaustritt ("leaving the Church"). Although there was no top-down official directive to revoke church membership, some Nazi Party members started doing so voluntarily and put other members under pressure to follow their example. Those who left the churches were designated as Gottgläubige ("believers in God"), a term officially recognised by the Interior Minister Wilhelm Frick on 26 November 1936. He stressed that the term signified political disassociation from the churches, not an act of religious apostasy. The term "dissident", which some church leavers had used up until then, was associated with being "without belief" (glaubenslos), whilst most of them emphasized that they still believed in a God, and thus required a different word. 
A census in May 1939, six years into the Nazi era, and after the annexation of the mostly Catholic Federal State of Austria and mostly Catholic German-occupied Czechoslovakia into German-occupied Europe, indicates that 54% of the population considered itself Protestant, 41% considered itself Catholic, 3.5% self-identified as Gottgläubig, and 1.5% as "atheist". An early April 2018 report of the Turkish Ministry of Education, titled The Youth is Sliding towards Deism, observed that an increasing number of pupils in İmam Hatip schools was repudiating Islam in favour of Deism (irreligious belief in a creator god). The report's publication generated large-scale controversy in the Turkish press and society at large, as well as amongst conservative Islamic sects, Muslim clerics, and Islamist parties in Turkey. The progressive Muslim theologian Mustafa Öztürk noted the Deistic trend among Turkish people a year earlier, arguing that the "very archaic, dogmatic notion of religion" held by the majority of those claiming to represent Islam was causing "the new generations [to become] indifferent, even distant, to the Islamic worldview." Despite a lack of reliable statistical data, numerous anecdotes and independent surveys appear to point in this direction. Although some commentators claim that the secularization of Turkey is merely a result of Western influence or even an alleged "conspiracy", other commentators, even some pro-government ones, have come to the conclusion that "the real reason for the loss of faith in Islam is not the West but Turkey itself". Though Deism subsided in the United States post-Enlightenment, it never died out entirely. Thomas Edison, for example, was heavily influenced by Thomas Paine's The Age of Reason. Edison defended Paine's "scientific deism", saying, "He has been called an atheist, but atheist he was not. Paine believed in a supreme intelligence, as representing the idea which other men often express by the name of deity." 
In 1878, Edison joined the Theosophical Society in New Jersey, but according to its founder, Helena Blavatsky, he was not a very active member. In an October 2, 1910, interview in the New York Times Magazine, Edison stated: Nature is what we know. We do not know the gods of religions. And nature is not kind, or merciful, or loving. If God made me—the fabled God of the three qualities of which I spoke: mercy, kindness, love—He also made the fish I catch and eat. And where do His mercy, kindness, and love for that fish come in? No; nature made us—nature did it all—not the gods of the religions. Edison was labeled an atheist for those remarks, and although he did not allow himself to be drawn into the controversy publicly, he clarified himself in a private letter: You have misunderstood the whole article, because you jumped to the conclusion that it denies the existence of God. There is no such denial, what you call God I call Nature, the Supreme intelligence that rules matter. All the article states is that it is doubtful in my opinion if our intelligence or soul or whatever one may call it lives hereafter as an entity or disperses back again from whence it came, scattered amongst the cells of which we are made. He also stated, "I do not believe in the God of the theologians; but that there is a Supreme Intelligence I do not doubt." The 2001 American Religious Identification Survey (ARIS) report estimated that between 1990 and 2001 the number of self-identifying Deists grew from 6,000 to 49,000, representing about 0.02% of the U.S. population at the time. The 2008 ARIS survey found, based on their stated beliefs rather than their religious identification, that 70% of Americans believe in a personal God:[i] roughly 12% are atheists or agnostics, and 12% believe in "a deist or paganistic concept of the Divine as a higher power" rather than a personal God. The term "ceremonial deism" was coined in 1962 by the dean of Yale Law School and American legal scholar Eugene V. 
Rostow, and has been used since 1984 by the Supreme Court to assess exemptions from the Establishment Clause of the First Amendment to the U.S. Constitution for religious references thought to be expressions of cultural tradition rather than earnest invocations of a deity. However, the American academic and professor of philosophy Martha Nussbaum remarks that the term does not describe any school of thought within Deism itself. Thomas Paine summarized his own creed in The Age of Reason: "I believe in one God, and no more; and I hope for happiness beyond this life," adding in the Recapitulation, "I trouble not myself about the manner of future existence. I content myself with believing, even to positive conviction, that the power that gave me existence is able to continue it, in any form and manner he pleases, either with or without this body; and it appears more probable to me that I shall continue to exist hereafter than that I should have had existence, as I now have, before that existence began." 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_ref-bengio_19-0] | [TOKENS: 1793] |
The skip-gram language model is an attempt to overcome the data sparsity problem that the word n-gram language model faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence whose components occur at distance at most k from each other. For example, the set of 1-skip-2-grams of a text includes all of its bigrams (2-grams) and, in addition, the pairs formed by skipping one intervening word. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-dimensional vector representation, then v(king) − v(male) + v(female) ≈ v(queen), where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. Neural models Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots. 
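The k-skip-n-gram extraction described above can be sketched in a few lines of Python. This is a minimal illustration (the function name and tokenization are my own); it follows the reading on which up to k words may be skipped between successive components, which matches the statement that the 1-skip-2-grams of a text include all of its ordinary bigrams plus the pairs with one word skipped.

```python
from itertools import combinations

def k_skip_n_grams(tokens, k, n):
    """All k-skip-n-grams of a token list: length-n subsequences in which
    at most k tokens are skipped between successive components."""
    grams = set()
    for start in range(len(tokens)):
        for rest in combinations(range(start + 1, len(tokens)), n - 1):
            positions = (start,) + rest
            # successive chosen positions may be at most k+1 apart
            if all(b - a <= k + 1 for a, b in zip(positions, positions[1:])):
                grams.add(tuple(tokens[p] for p in positions))
    return grams

# 1-skip-2-grams of a four-word text: the three ordinary bigrams
# plus the two pairs formed by skipping one word.
print(sorted(k_skip_n_grams("the rain in Spain".split(), k=1, n=2)))
```

For realistic corpora the combinatorial enumeration would be windowed rather than global, but the definition is the same.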
LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. 
Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although neural language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do. Evaluation and benchmarks Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from the data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems. 
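One common intrinsic quality measure of the kind mentioned above is perplexity: the exponentiated average negative log-probability the model assigns to each next token of a held-out text. The sketch below scores a toy add-one-smoothed bigram model; the function names and the tiny corpus are illustrative inventions, not from any benchmark.

```python
import math
from collections import Counter

def bigram_addone(train_tokens, vocab):
    """Conditional P(w | w_prev) for a bigram model with add-one smoothing:
    (count(w_prev, w) + 1) / (count(w_prev) + |V|)."""
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    V = len(vocab)
    return lambda w_prev, w: (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

def perplexity(prob, tokens):
    """exp of the average negative log-probability of each next token."""
    logs = [math.log(prob(a, b)) for a, b in zip(tokens, tokens[1:])]
    return math.exp(-sum(logs) / len(logs))

vocab = {"<s>", "a", "b", "</s>"}
train = ["<s>", "a", "b", "a", "b", "</s>"]
model = bigram_addone(train, vocab)
print(perplexity(model, ["<s>", "a", "b", "</s>"]))
```

Lower perplexity means the model finds the held-out text less surprising; a uniform model over this 4-word vocabulary would score a perplexity of 4.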
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ontology] | [TOKENS: 8742] |
Contents Ontology Ontology is the philosophical study of being. It is traditionally understood as the subdiscipline of metaphysics focused on the most general features of reality. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines the commonalities among all things and investigates their classification into basic types, such as the categories of particulars and universals. Particulars are unique, non-repeatable entities, such as the person Socrates, whereas universals are general, repeatable entities, like the color green. Another distinction exists between concrete objects existing in space and time, such as a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality by employing categories such as substance, property, relation, state of affairs, and event. Ontologists disagree regarding which entities exist at the most basic level. Platonic realism asserts that universals have objective existence, while conceptualism maintains that universals exist only in the mind, and nominalism denies their existence altogether. Similar disputes pertain to mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism posits that fundamentally only matter exists, whereas dualism asserts that mind and matter are independent principles. According to some ontologists, objective answers to ontological questions do not exist, with perspectives shaped by differing linguistic practices. Ontology employs diverse methods of inquiry, including the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. 
Formal ontology investigates the most abstract features of objects, while applied ontology utilizes ontological theories and principles to study entities within specific domains. For example, social ontology examines basic concepts used in the social sciences. Applied ontology is particularly relevant to information and computer science, which develop conceptual frameworks of limited domains. These frameworks facilitate the structured storage of information, such as in a college database tracking academic activities. Ontology is also pertinent to the fields of logic, theology, and anthropology. The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name. Definition Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects.[a] In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean an inventory or a conceptual scheme of a particular domain, such as the ontology of genes. In this context, an inventory is a comprehensive list of elements. A conceptual scheme is a framework of the key concepts and their relationships. Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. 
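The kind of limited-domain conceptual framework described above, such as a college database tracking academic activities, can be sketched in a few lines. The concept names, the is-a hierarchy, and the instances below are invented for illustration and are not drawn from any real system.

```python
# Minimal sketch of a domain ontology: concepts (universals) arranged
# in an is-a hierarchy, with instances (particulars) typed by them.
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    parent: "Concept | None" = None  # is-a (subsumption) relation

    def is_a(self, other: "Concept") -> bool:
        # Walk up the hierarchy to test subsumption.
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

# Concept hierarchy for a hypothetical college domain.
person = Concept("Person")
student = Concept("Student", parent=person)
lecturer = Concept("Lecturer", parent=person)
course = Concept("Course")

# Instances classified by the concepts they fall under.
instances = {"alice": student, "bob": lecturer, "logic101": course}

print(instances["alice"].is_a(person))    # True: Student is subsumed by Person
print(instances["logic101"].is_a(person)) # False: Course is outside that branch
```

Real applied ontologies use richer formalisms (description logics, OWL), but the core idea is the same: a structured inventory of a domain's concepts and relations that a database can be organized around.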
A traditionally influential characterization asserts that ontology is a subdiscipline of metaphysics. According to this view, metaphysics is the study of various aspects of fundamental reality, whereas ontology restricts itself to the most general features of reality. This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms. The etymology of the word ontology traces back to the ancient Greek terms ὄντος (ontos, meaning 'being') and λογία (logia, meaning 'study of'), literally, 'the study of being'. The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century. Basic concepts Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing all of reality and every entity within it.[b] In its broadest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics argue that a thing without being cannot have properties. This means that properties presuppose being and cannot explain it. Another suggestion is that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with causal influence truly exist.
A controversial proposal by philosopher George Berkeley suggests that all existence is mental. He expressed this immaterialism in his slogan "to be is to be perceived". Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent, in contrast to becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like. Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or not with no intermediary states or degrees. The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing. A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars. Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. 
Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain. Universals can take the form of properties or relations.[c] Properties describe the characteristics of things. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle, whereas being red is an accidental property.[d] Relations are ways in which two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations. Substances[e] play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red. States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise.
States of affairs that correspond to reality are called facts.[f] Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts. Events are particular entities[g] that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events. Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes.[h] The existence and nature of abstract objects remain subjects of philosophical debate. Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and the pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. According to another view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it. Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. 
Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are abstract objects and exist outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects. This makes it difficult to assess the ontological status of intentional objects. Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and the facts it explains. An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers. Possibility and necessity are further topics in ontology. 
Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds. The field of modal logic provides a precise formalization of the concepts of possibility and necessity. In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year". The notion of identity also has a number of philosophical implications in terms of how it interacts with the aforementioned necessity and possibility.
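The possible-world truth conditions just described translate directly into code: "possibly p" quantifies existentially over worlds and "necessarily p" quantifies universally. The worlds and the proposition below are invented for illustration.

```python
# Minimal sketch of possible-world semantics over a finite set of worlds,
# each modeled as a dict assigning truth values to atomic propositions.
worlds = [
    {"extraterrestrial_life": True},   # a world where life exists elsewhere
    {"extraterrestrial_life": False},  # a world where it does not
]

def possibly(prop, worlds):
    # "Possibly p" is true if p holds in at least one possible world.
    return any(prop(w) for w in worlds)

def necessarily(prop, worlds):
    # "Necessarily p" is true if p holds in all possible worlds.
    return all(prop(w) for w in worlds)

def life(w):
    return w["extraterrestrial_life"]

print(possibly(life, worlds))     # True: life exists in some world
print(necessarily(life, worlds))  # False: life is absent in one world
```

Modal logic generalizes this finite sketch with accessibility relations between worlds, which this example omits.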
Most famously, Saul Kripke contended that discovered identities such as "Water is H2O" are necessarily true because "H2O" is what's known as a rigid designator. Branches There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts. Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases. Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. 
Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner.[i] Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology. Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization. Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion. Metaontology studies the underlying concepts, assumptions, and methods of ontology. Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being. Schools of thought The term realism is used for various theories[j] that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. 
This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework. In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects. Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. 
Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation. Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects. Scientific realists say that the scientific description of the world is an accurate representation of reality.[k] It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments. Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism. Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. 
Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything. The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality.[l] Pluralism is more commonly accepted and says that several distinct entities exist. The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. 
The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties. Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. This stuff may take various forms and is often conceived as infinitely divisible.[m] According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle. Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level.[n] Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. 
Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts.[o] In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. The dispute between constituent and relational ontologies[p] concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties. Hierarchical ontologies state that the world is organized into levels. 
Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists.[q] The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves. Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. 
Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things. Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception. Methods Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology. Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential.[r] The transcendental method begins with a simple observation that a certain entity exists. In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist. Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. 
A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness.

Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing[s] the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them.

Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue.

In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology.
Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.

Related fields

Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier (∃), which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being. Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality.

Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it.
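As a small illustration of how a logical system makes such existence claims checkable, a formula like ∃x Dog(x) can be evaluated mechanically over a finite domain. The following is a minimal sketch; the domain, the predicate extension, and the function names are illustrative assumptions, not taken from the source:

```python
# Evaluate the first-order formula ∃x Dog(x) over a small finite domain.
# The domain and the extension of the Dog predicate are invented for illustration.
domain = ["Rex", "Felix", "Tweety"]
interpretation = {"Dog": {"Rex"}}  # which individuals the predicate applies to

def exists(pred, dom, interp):
    """Return True if the formula ∃x pred(x) holds in the given model."""
    return any(x in interp[pred] for x in dom)

print(exists("Dog", domain, interpretation))  # True: the domain contains a dog
```

The universal quantifier could be handled analogously by replacing `any` with `all`.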
For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains; examples of such upper ontologies are the Suggested Upper Merged Ontology and Basic Formal Ontology. Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities.

The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various Indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature.

Ontology is closely related to theology and its interest in the existence of God as an ultimate entity.
The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist, since God would not be the greatest conceivable being if God lacked existence. Another overlap between the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology.

History

The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss the sense in which ultimate reality is one or many. Samkhya, the first orthodox school of Indian philosophy,[t] formulated an atheistic dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school[u] proposed a comprehensive system of categories.

In ancient China, Laozi's (6th century BCE)[v] Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being.

Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms.
It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself.

The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor.

In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, all of which has merely contingent existence.

In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation.
9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos.

René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds.

Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations.
In Indian philosophy, Swami Vivekananda (1863–1902) expanded on Advaita Vedanta, emphasizing the unity of all existence. Sri Aurobindo (1872–1950) presented a "realistic Advaita", which understands the world not as an illusion but as a real, evolutionary manifestation of a divine consciousness.

At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre (1905–1980) responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual.

Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world.
Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains.
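A toy version of the kind of domain ontology described earlier (categories such as person, company, and address) can make these conceptual frameworks concrete. The class names, relations, and validity check below are hypothetical illustrations, not drawn from any named system such as SUMO or BFO; real ontologies are typically expressed in languages like OWL or RDF:

```python
# A tiny domain ontology: categories (classes), their hierarchy, and typed relations.
# All names here are hypothetical, invented for illustration only.
ontology = {
    "classes": {"Entity", "Person", "Company", "Address", "Name"},
    "subclass_of": {"Person": "Entity", "Company": "Entity",
                    "Address": "Entity", "Name": "Entity"},
    "relations": {  # relation name -> (domain class, range class)
        "has_name": ("Person", "Name"),
        "employed_by": ("Person", "Company"),
        "located_at": ("Company", "Address"),
    },
}

def valid_assertion(relation, subject_class, object_class):
    """Check that an assertion respects the relation's declared domain and range."""
    dom, rng = ontology["relations"][relation]
    return subject_class == dom and object_class == rng

print(valid_assertion("employed_by", "Person", "Company"))  # True
print(valid_assertion("located_at", "Person", "Address"))   # False
```

Constraining which classes a relation may connect is what allows databases built on such an ontology to reject ill-formed records automatically.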