[SOURCE: https://en.wikipedia.org/wiki/Twilight] | [TOKENS: 3235]
Twilight

Twilight is daylight illumination produced by diffuse sky radiation when the Sun is below the horizon: sunlight scattered in the upper atmosphere illuminates both the Earth's lower atmosphere and the Earth's surface. Twilight may also refer to any period when this illumination occurs, including dawn and dusk. The lower the Sun is beneath the horizon, the dimmer the sky (other factors such as atmospheric conditions being equal). When the Sun sinks to 18° below the horizon, the illumination emanating from the sky is nearly zero, and evening twilight becomes nighttime. When the rising Sun again reaches 18° below the horizon, nighttime becomes morning twilight. Owing to its distinctive quality, primarily the absence of shadows and the appearance of objects silhouetted against the lit sky, twilight has long been popular with photographers and painters, who often refer to it as the blue hour, after the French expression l'heure bleue.

By analogy with evening twilight, the word twilight is sometimes used metaphorically to imply that something is losing strength and approaching its end. For example, very old people may be said to be "in the twilight of their lives". The collateral adjective for twilight is crepuscular, which may be used to describe the behavior of animals that are most active during this period.

Definitions by geometry

Twilight is defined according to the solar elevation angle θs, the position of the geometric center of the Sun relative to the horizon. There are three established and widely accepted subcategories of twilight: civil twilight (nearest the horizon), nautical twilight, and astronomical twilight (farthest from the horizon).

Civil twilight

Civil twilight is the period during which the geometric center of the Sun is between the horizon and 6° below the horizon. It is the period when enough natural light remains that artificial light in towns and cities is not needed. In the United States military, the initialisms BMCT (begin morning civil twilight, i.e., civil dawn) and EECT (end evening civil twilight, i.e., civil dusk) refer to the start of morning civil twilight and the end of evening civil twilight, respectively. Civil dawn is preceded by morning nautical twilight, and civil dusk is followed by evening nautical twilight.

Under clear weather conditions, civil twilight approximates the limit at which solar illumination suffices for the human eye to clearly distinguish terrestrial objects; illumination is sufficient that artificial sources are unnecessary for most outdoor activities. At civil dawn and at civil dusk, sunlight clearly defines the horizon while the brightest stars and planets can appear. As observed from the Earth (see apparent magnitude), sky-gazers know Venus, the brightest planet, as the "morning star" or "evening star" because they can see it during civil twilight.

Although civil dawn marks the first appearance of civil twilight before sunrise, and civil dusk marks the final disappearance of civil twilight after sunset, statutes referring to civil twilight typically specify a fixed period after sunset or before sunrise (most commonly 20–30 minutes) rather than how many degrees the Sun is below the horizon. Examples include when drivers of automobiles must turn on their headlights (called lighting-up time in the UK), when hunting is restricted, or when the crime of burglary is to be treated as nighttime burglary, which carries stiffer penalties in some jurisdictions.
The period may affect when extra equipment, such as anti-collision lights, is required for aircraft to operate. In the US, civil twilight for aviation is defined in Part 1.1 of the Federal Aviation Regulations (FARs) as the time listed in the American Air Almanac.

Nautical twilight

Nautical twilight is the period during which the geometric center of the Sun is between 6° and 12° below the horizon. After nautical dusk and before nautical dawn, sailors cannot navigate by the horizon at sea, as they cannot clearly see it. At nautical dawn and nautical dusk, the human eye finds it difficult, if not impossible, to discern traces of illumination near the sunset or sunrise point of the horizon (first light after nautical dawn but before civil dawn, and nightfall after civil dusk but before nautical dusk). Sailors can take reliable sightings of well-known stars during the stage of nautical twilight when they can distinguish a visible horizon for reference (i.e., after nautical dawn or before nautical dusk). Under good atmospheric conditions and in the absence of other illumination, during nautical twilight the human eye may distinguish general outlines of ground objects but cannot carry out detailed outdoor operations.

Nautical twilight has military considerations as well. The initialisms BMNT (begin morning nautical twilight, i.e., nautical dawn) and EENT (end evening nautical twilight, i.e., nautical dusk) are used and considered when planning military operations. A military unit may treat BMNT and EENT with heightened security, e.g., by "standing to", in which everyone assumes a defensive position.

Astronomical twilight

Astronomical twilight is the period during which the geometric center of the Sun is between 12° and 18° below the horizon. During astronomical twilight, the sky is dark enough to permit astronomical observation of point sources of light such as stars, except in regions with more intense skyglow due to light pollution, moonlight, auroras, and other sources of light. Some critical observations, such as of faint diffuse objects like nebulae and galaxies, may require observation beyond the limit of astronomical twilight. Theoretically, the faintest stars detectable by the naked eye (those of approximately the sixth magnitude) become visible in the evening at astronomical dusk and invisible at astronomical dawn.
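The three bands are fixed thresholds on the solar elevation angle θs, so classifying a moment of the day is a matter of simple comparisons. A minimal sketch in Python (the function name and structure are illustrative, not taken from any standard library):

```python
def twilight_stage(solar_elevation_deg: float) -> str:
    """Classify the sky state from the solar elevation angle in degrees.

    The angle is the position of the geometric center of the Sun relative
    to the horizon: positive above, negative below.
    """
    if solar_elevation_deg >= 0:
        return "day"                    # Sun at or above the horizon
    if solar_elevation_deg >= -6:
        return "civil twilight"         # 0° to 6° below the horizon
    if solar_elevation_deg >= -12:
        return "nautical twilight"      # 6° to 12° below the horizon
    if solar_elevation_deg >= -18:
        return "astronomical twilight"  # 12° to 18° below the horizon
    return "night"                      # more than 18° below the horizon

print(twilight_stage(-8.5))  # -> "nautical twilight"
```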
Times of occurrence

Observers within about 48°34′ of the Equator can view twilight twice each day on every date of the year: between astronomical, nautical, or civil dawn and sunrise, as well as between sunset and civil, nautical, or astronomical dusk. This also occurs for most observers at higher latitudes on many dates throughout the year, except those around the summer solstice. However, at latitudes closer than 8°35′ to either Pole (between 81°25′ and 90°), the Sun cannot both rise above the horizon and sink more than 18° below it on the same date, because the angular difference between solar noon and solar midnight is less than 17°10′, so this sequence of twilights cannot occur. Observers within 63°26′ of the Equator can view twilight twice each day on every date between the month of the autumnal equinox and the month of the vernal equinox, i.e., from September 1 to March 31 of the following year in the Northern Hemisphere and from March 1 to September 30 in the Southern Hemisphere.

The maximum latitude at which both astronomical dawn before sunrise and astronomical dusk after sunset occur on every day of the year is 48°33′43″. The maximum latitude at which they occur on every date in the autumn and winter half of the year is 63°26′07″. The latitude of the boundary between nighttime and twilight at solar midnight varies with the month.

At latitudes greater than about 48°34′ North or South, on dates near the summer solstice (June 21 in the Northern Hemisphere or December 21 in the Southern Hemisphere), twilight can last from sunset to sunrise, since the Sun does not sink more than 18 degrees below the horizon, so complete darkness does not occur even at solar midnight. These latitudes include many densely populated regions of the Earth, including the entire United Kingdom and other countries in northern Europe and even parts of central Europe. This type of twilight also occurs between one day and the next at latitudes within the polar circles shortly before and shortly after the period of midnight sun.

In Arctic and Antarctic latitudes in wintertime, the polar night only rarely produces complete darkness for 24 hours each day. This can occur only at locations within about 5.5 degrees of latitude of the Pole, and there only on dates close to the winter solstice. At all other latitudes and dates, the polar night includes a daily period of twilight, when the Sun is not far below the horizon. Around the winter solstice, when the solar declination changes slowly, complete darkness lasts several weeks at the Pole itself, e.g., from May 11 to July 31 at Amundsen–Scott South Pole Station; the North Pole experiences this from November 13 to January 29. During a polar night, the sky at solar noon depends on the latitude:
Solar noon at civil twilight: between about 67°24′ and 72°34′ north or south.
Solar noon at nautical twilight: between about 72°34′ and 78°34′ north or south.
Solar noon at astronomical twilight: between about 78°34′ and 84°34′ north or south.
Solar noon at night: between approximately 84°34′ and exactly 90° north or south.

At latitudes greater than 81°25′ North or South, since the variation in the Sun's elevation is less than 18 degrees, twilight can last for the entire 24 hours. This occurs for one day at latitudes near 8°35′ from the Pole and extends up to several weeks the further toward the Pole one goes, both in the Arctic and the Antarctic. The only permanent settlement to experience this condition is Alert, Nunavut, Canada, where it occurs from February 22–26 and again from October 15–19.

Duration

The duration of twilight depends on latitude and the time of year. The apparent travel of the Sun occurs at a rate of 15 degrees per hour (360° per day), but sunrise and sunset typically happen at oblique angles to the horizon, and the actual duration of any twilight period is a function of that angle, being longer for more oblique angles. The angle of the Sun's motion with respect to the horizon changes with latitude as well as with the time of year (which affects the angle of the Earth's axis with respect to the Sun). At Greenwich, England (51.5°N), the duration of civil twilight varies from 33 to 48 minutes, depending on the time of year.
At the equator, civil twilight can last as little as 24 minutes: at low latitudes the Sun's apparent path is perpendicular to the observer's horizon, so it crosses the 6° civil-twilight band at the full 15° per hour, taking 6/15 × 60 = 24 minutes. But at the poles, civil twilight can be as long as 2–3 weeks. In the Arctic and Antarctic regions, twilight (if there is any) can last for several hours. There is no astronomical twilight at the poles near the winter solstice (for about 74 days at the North Pole and about 80 days at the South Pole). As one gets closer to the Arctic and Antarctic circles, the Sun's disk moves toward the observer's horizon at a lower angle, so the observer's location passes through the various twilight zones less directly, taking more time. Within the polar circles, twenty-four-hour daylight is encountered in summer, and in regions very close to the poles, twilight can last for weeks on the winter side of the equinoxes.

Outside the polar circles, where the angular distance from the polar circle is less than the angle which defines twilight (see above), twilight can continue through local midnight near the summer solstice. The precise position of the polar circles, and of the regions where twilight can continue through local midnight, varies slightly from year to year with Earth's axial tilt. The lowest latitudes at which the various twilights can continue through local midnight are approximately 60.561° (60°33′43″) for civil twilight, 54.561° (54°33′43″) for nautical twilight, and 48.561° (48°33′43″) for astronomical twilight. Although Helsinki, Oslo, Stockholm, Tallinn, and Saint Petersburg also enter nautical twilight after sunset, they have noticeably lighter night skies around the summer solstice than other locations in the same twilight category, because they do not go far into nautical twilight. A white night is a night with only civil twilight, lasting from sunset to sunrise. At the winter solstice within the polar circle, twilight can extend through solar noon at latitudes below 72.561° (72°33′43″) for civil twilight, 78.561° (78°33′43″) for nautical twilight, and 84.561° (84°33′43″) for astronomical twilight.

On other planets

Twilight on Mars is longer than on Earth, lasting for up to two hours before sunrise or after sunset. Dust high in the atmosphere scatters light to the night side of the planet. Similar twilights are seen on Earth following major volcanic eruptions.

In culture

In Christian practice, "vigil" observances often occur during twilight on the evening before major feast days or holidays. For example, the Easter Vigil is held in the hours of darkness between sunset on Holy Saturday and sunrise on Easter Day – most commonly in the evening of Holy Saturday or at midnight – and is the first celebration of Easter, days traditionally being considered to begin at sunset.

Hinduism prescribes the observance of certain practices during twilight, a period generally called sandhya. The period is also known by the poetic term gōdhūḷi in Sanskrit, literally 'cow dust', referring to the time when cows returned from the fields after grazing, kicking up dust in the process. Many rituals, such as Sandhyavandanam and puja, are performed at the twilight hour. Consuming food is not advised during this time, and some adherents regard asuras as active during these hours.
One of the avatars of Vishnu, Narasimha, is closely associated with the twilight period. According to Hindu scriptures, an asura king, Hiranyakashipu, performed penance and obtained a boon from Brahma that he could not be killed during day or night, neither by human nor animal, neither inside his house nor outside. Vishnu appeared in a half-man, half-lion form (neither human nor animal) and ended Hiranyakashipu's life at twilight (neither day nor night) while he stood on the threshold of his house (neither inside nor outside).

Twilight is important in Islam, as it determines when certain universally obligatory prayers are to be recited: morning twilight is the time for the morning prayer (Fajr), while evening twilight is the time for the evening prayer (Maghrib). Also, during Ramadan, the time for suhoor (the morning meal before fasting) ends at morning twilight, while the day's fast ends after sunset. There is also an important discussion in Islamic jurisprudence distinguishing the "true dawn" from the "false dawn".

In Judaism, twilight is considered neither day nor night; consequently it is treated as a safeguard against encroachment upon either. It can be considered a liminal time. For example, the twilight of Friday is reckoned as Sabbath eve, and that of Saturday as Sabbath day; the same rule applies to festival days.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_human_consciousness] | [TOKENS: 1963]
Sociology of human consciousness

The sociology of human consciousness, or the sociology of consciousness, uses the theories and methodology of sociology to explore and examine consciousness.

Overview

The foundations of this work may be traced to philosopher and sociologist George Herbert Mead, whose work provided major insights into the formation of mind, concepts of self and other, and the internalization of society in individual social beings, viewing these as emerging out of human interaction and communication. Recent work brings such a sociological and social psychological perspective to bear on several key aspects of consciousness, and in doing so inverts explanation: starting from collective phenomena, one ends up analyzing individual consciousness. In making this inversion, these approaches do not totally reject reductionist approaches, nor deny their value in identifying the "hardware" through which collective and social psychological processes operate. However, they reject the idea that a complete explanation can be formulated on the basis of either purely sociological mechanisms or underlying physical, chemical, neurological, hormonal, or psychological factors and processes. (For a critique of reductionism from the perspective of modern physics and biology, see Morowitz (1981).) The biological and bio-physical bases of human life are recognized, but these approaches cannot be relied on entirely: the level of analysis is misdirected for some classes of consciousness phenomena, and most natural science approaches focus on the wrong levels and the wrong factors for explaining some of the most mysterious and paradoxical features of human consciousness.

Theory

The sociological approach emphasizes the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social, and this is no less true of our experiences of "collective consciousness" than of our experiences of individual consciousness. The theory suggests that the problem of consciousness can be approached fruitfully by beginning with the human group and collective phenomena: community, language, language-based communication, and institutional and cultural arrangements. A collective is a group or population of individuals that possesses, or develops through communication, collective representations or models of "we" as opposed to "them": a group, community, organization, or nation is contrasted to an "other" in terms of its values and goals, its structure and modes of operating, its relation to its environment and other agents, its potentialities and weaknesses, its strategies and developments, and so on. A collective has the capacity to form collective representations of, and to communicate about, what characterizes it: what (and how) this self perceives, judges, or does; what it can (and cannot) do; and what it should (or should not) do. It monitors its activities, its achievements and failures, and also, to a greater or lesser extent, analyzes and discusses itself as a defined and developing collective agent. This is what is meant by self-reflectivity.
Such reflectivity is encoded in language and developed in conversations about collective selves (as discussed below, there are also conversations about the selves of individuals, defining, justifying, and stigmatizing them). Human consciousness, in at least one major sense, is a type of reflective activity. It entails the capacity to observe, monitor, judge, and decide about the collective self. This is a basis for maintaining a particular collective as it is understood or represented; it is a basis for re-orienting and re-organizing the collective self in response to performance failings or profound crisis (economic, political, cultural). Collective reflectivity emerges, then, as a function of a group or organization producing and making use of collective representations of the self in its discussions, critical reflections, planning, and actions.

Individual consciousness is the normal outcome of processes of collective naming, classifying, monitoring, judging, and reflecting on the individual members of the group or organization. An individual in a collective context learns to participate in discussions and discourses about "themself", that is, group reflections on themself: their appearance, their orientations and attitudes, their strategies and conduct. Thus, an individual learns (in line with George Herbert Mead's earlier formulations) a naming and classification of themself (self-description and identity) and a characterization of their judgments, actions, and predispositions. In acquiring a language and conceptual framework for this mode of activity, along with experience and skills in reflective discussion, they develop a capability of inner reflection and inner dialog about themself. These are characteristic features of a particular type of individual "consciousness". This conception points up the socially constructed character of key properties of the human mind, realized through processes of social interaction and social construction. In sum, individual self-representation, self-reference, self-reflectivity, and experiences of consciousness derive from the collective experience.

Self-reflectivity as a type of consciousness often facilitates critical examination and re-construction of selves, collective as well as individual. This plays an essential role in human communities (as well as in individual beings) in the face of systematic or highly risky performance failures or new types of problems. Through self-reflection, agents may manage, in the course of directed problem-solving, to develop more effective institutional arrangements, for instance large-scale means of social coordination such as administration, democratic association, or markets.

Relationship with social organization

Language-based collective representations of the past as well as of the future enable agents to escape the present, to enter into future as well as past imagined worlds, and to reflect together on these worlds. Moreover, in relation to the past, present, and future, agents may generate alternative representations. These alternative constructions, imagined, discussed, struggled over, and tested, make for the generation of variety, a major input into evolutionary processes, as discussed elsewhere. Such variety may also lead to social conflicts, as agents disagree about representations or oppose the implications of, or remedies to, problems proposed by particular agents.
This opens the way for political struggles about alternative conceptions and solutions (where democratic politics at times entails collective self-reflectivity par excellence). In general, such processes enhance the collective capacity to deal with new challenges and crises. Thus, a collective potentially has a rich basis not only for talking about, discussing, and agreeing (or disagreeing) about a variety of objects, including the "collective self" as well as particular "individual selves", but also a means to conceptualize and develop alternative types of social relationships, effective forms of leadership, coordination, and control, and, in general, new normative orders and institutional arrangements. Collectives can even develop their potentialities for collective representation and self-reflectivity, for instance through innovations in information and accounting systems and processes of social accountability. These potentialities enable systematic, directed problem-solving and the generation of variety and complex strategies. In particular selective environments, these make for major evolutionary advantages.

The powerful tool of collective reflectivity must be seen as a double-edged sword: on the one hand it expands freedom of opportunity and variability; on the other, it imposes particular constraints and limits variability. Collective representations, reflectivity, and directed problem-solving based on them may prevent human groups from experiencing or discovering the un-represented and the unnamed; unrecognized or poorly defined problems cannot be dealt with (as discussed elsewhere, for instance, in the case of failures of accounting systems to recognize or take into account important social and environmental conditions and developments). Reflective and problem-solving powers may then be distorted, the generation of alternatives and varieties narrow and largely ineffective, and social innovation and transformation misdirected and possibly self-destructive. Thus, the presumed evolutionary advantages of human reflectivity must be qualified or viewed as conditional.

Outlooks

In sum, recent research, building on the work of George Herbert Mead, suggests that a sociological and social psychological perspective can be a point of departure for defining and analyzing certain forms of human consciousness, or more precisely one class of consciousness phenomena, namely verbalized reflectivity: monitoring, discussing, judging, and re-orienting and re-organizing the self; representing and analyzing what characterizes the self, what the self perceives, judges, could do, and should do (or should not do). The "hard problem" of consciousness can be approached fruitfully by beginning with the human group and collective phenomena: community, language, language-based communication, institutional and cultural arrangements, collective representations, self-conceptions, and self-referentiality. Collective reflectivity emerges as a function of an organization or group producing and making use of collective representations of the self ("we", our group, community, organization, nation) in its discussions, critical reflections, and decision-making. A collective monitors and discusses its activities, achievements, and failures, and reflects on itself as a defined, acting, and developing collective being. This reflectivity is encoded in language and developed in conversations about collective (as well as individual) selves.
Individual consciousness is seen as deriving from the processes of collective naming, classifying, monitoring, judging, reflecting on, and conducting discussions and discourses about, the individual themself. In acquiring a language and conceptual framework for this mode of activity, along with skills and experiences in reflective discussion, they develop a capability of inner reflection and inner discourse about self, which are characteristic features of individual consciousness. One can also distinguish multiple modes of individual awareness and consciousness, separating awareness from consciousness proper and identifying pre- and sub-conscious levels. This points up the complexity of the human mind, in part because of its elaboration through processes of social interaction and construction.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Alma-0] | [TOKENS: 321]
Alma-0

Alma-0 is a multi-paradigm computer programming language. It is an augmented version of the imperative Modula-2 language with logic-programming features and a convenient backtracking ability. It is small and strongly typed, and it combines constraint programming, a limited number of features inspired by logic programming, and support for imperative paradigms. The language advocates declarative programming. The designers claim that search-oriented solutions built with it are substantially simpler than their counterparts written in a purely imperative or logic programming style. Alma-0 provides natural, high-level constructs for building search trees.

Overview

Since the designers of Alma-0 wanted to create a distinct and substantially simpler proposal than prior attempts to integrate declarative programming constructs (such as automatic backtracking) into imperative programming, the design of Alma-0 was guided by four principles. Alma-0 can be viewed not only as a specific and concrete programming language proposal, but also as an example of a generic method for extending any imperative programming language with features that support declarative programming. The feasibility of the Alma-0 approach has been demonstrated through a full implementation of the language (including a description of its semantics) for a subset of Modula-2.

Features

The implemented features of Alma-0 include imperative and logic programming modes. The Alma-0 designers claim that assignment, which is usually shunned in pure declarative and logic programming, is actually needed in a number of natural situations, including for counting and recording purposes. They also affirm that the means of expressing such "natural" uses of assignment within the logic programming paradigm are unnatural.
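Alma-0 itself extends Modula-2, so the following is not Alma-0 code; it is a loose Python sketch of the kind of implicit backtracking search the language builds into imperative code, using generators as choice points (all names are illustrative):

```python
from typing import Iterator, List

def queens(n: int) -> Iterator[List[int]]:
    """Yield n-queens solutions, one row index per column.

    Each recursive loop acts as a choice point: a failed condition simply
    falls through to the next candidate, and exhausting a loop backtracks
    to the previous column. This is roughly the effect Alma-0 achieves
    with language-level backtracking constructs.
    """
    def extend(partial: List[int]) -> Iterator[List[int]]:
        if len(partial) == n:
            yield partial
            return
        col = len(partial)
        for row in range(n):  # choice point: try each row in this column
            if all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(partial)):
                yield from extend(partial + [row])  # descend; exhaustion backtracks
    yield from extend([])

print(next(queens(6)))  # first solution found by the search
```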
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-python.org-18] | [TOKENS: 4314]
Python (programming language)

Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability through the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented, and functional programming.

Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, with the stable release expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates.

Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms.

History

Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL), capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989, and van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL), a title bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.)

Python 2.0 was released on 16 October 2000, with many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) backported security updates. Python 3.0 was released on 3 December 2008; it was a major revision, with some new semantics and changed syntax, and not completely backward-compatible with earlier versions. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches received final security updates, ending with Python 3.9.24 and then 3.9.25, the final release of the 3.9 series. Python 3.10 has been the oldest supported branch since November 2025. Python 3.15 has had an alpha release, and an official downloadable Android executable is available for Python 3.14. Releases receive two years of full support followed by three years of security support.

Design philosophy and features

Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming, including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a "glue language" because it is purposely designed to integrate components written in other languages.

Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the Lisp tradition: it has filter, map, and reduce functions, as well as list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML.

Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8.

Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which took the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar while giving developers a choice in their coding methodology. Python lacks do..while loops, which van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates the approach that "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal; there are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
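As a concrete illustration of the point above about string formatting, three coexisting ways to build the same string (all standard Python):

```python
name, count = "eggs", 3

# 1. printf-style formatting, inherited from early Python
s1 = "%s x%d" % (name, count)

# 2. str.format, introduced in Python 2.6/3.0
s2 = "{} x{}".format(name, count)

# 3. f-strings, introduced in Python 3.6
s3 = f"{name} x{count}"

assert s1 == s2 == s3 == "eggs x3"
```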
For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes).

Python is meant to be a fun language to use. This goal is reflected in the name, a tribute to the British comedy group Monty Python, and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch) rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability.

Syntax and semantics

Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces.

Python's statements include the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically-typed languages, where each variable may contain only a value of a certain type.

Python does not support tail call optimization or first-class continuations; according to van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function, and from version 3.3, data can be passed through multiple stack levels.
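A small sketch of the generator capabilities just described: send() passes data back into a generator (Python 2.5+), and yield from relays it through another stack level (3.3+):

```python
def accumulator():
    """A coroutine-style generator: values sent in are added to a running total."""
    total = 0
    while True:
        value = yield total  # yields the current total, receives the next value via send()
        total += value

def delegating():
    """'yield from' passes sent values through to the inner generator (3.3+)."""
    yield from accumulator()

acc = delegating()
next(acc)            # prime the generator; runs to the first yield
print(acc.send(10))  # 10
print(acc.send(5))   # 15
```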
In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to some duplicated functionality. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement.

Python uses duck typing: it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style.

Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides names for use in type annotations. The mypy project also includes a compiler called mypyc, which leverages type annotations for optimization.

Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g., 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, representing positive and negative numbers respectively.

Division between integers with / produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency: for instance, it implies that the equation (a + b)//b == a//b + 1 is always true, and that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b) when b is a positive integer; maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative.

Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0.
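The division, modulo, and rounding rules above can be checked directly at the REPL; a short sketch (Python 3 semantics):

```python
# True division always yields a float; floor division rounds toward negative infinity.
print(7 / 2)      # 3.5
print(7 // 2)     # 3
print(-7 // 2)    # -4  (floor, not truncation)

# The remainder takes the sign of the divisor, so b*(a//b) + a%b == a always holds.
print(4 % -3)     # -2
assert -3 * (4 // -3) + (4 % -3) == 4

# Exponentiation, and round-half-to-even tie-breaking in Python 3.
print(5 ** 3)     # 125
print(9 ** 0.5)   # 3.0
print(round(1.5), round(2.5))  # 2 2
```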
Python allows Boolean expressions containing multiple equality relations in a way that is consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c.

Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation.

Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header.

Code examples

The examples below show a function that prints its inputs (with a default parameter value), a "Hello, World!" program, and a program that calculates the factorial of a non-negative integer.
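Minimal sketches of those three listings, in their canonical forms (the exact originals were not preserved in this extract, and the function names are illustrative):

```python
# A function that prints its inputs; 'greeting' has a default value used
# when no argument is supplied at the call site.
def print_inputs(name, greeting="Hello"):
    print(greeting, name)

print_inputs("world")           # Hello world
print_inputs("world", "Howdy")  # Howdy world

# "Hello, World!" program:
print("Hello, World!")

# Factorial of a non-negative integer:
def factorial(n: int) -> int:
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
```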
Libraries

Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, doing arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most parts are specified only by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages.

Development environments

Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command-line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web-browser-based IDEs.

Implementations

CPython is the reference implementation of Python. It is written in C, meeting the C11 standard since version 3.11. Older versions used the C89 standard with several select C99 features; third-party extensions are not limited to older C versions and can be implemented using, for example, C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. It is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer), with unofficial support for VMS. Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5. Platform portability was one of Python's earliest priorities: during the development of Python 1 and 2, even OS/2 and Solaris were supported. Since then, support has been dropped for many outdated platforms, and all current Python versions (since 3.7) run only on operating systems that support multithreading, so far fewer operating systems are supported than in the past.

All alternative implementations have at least slightly different semantics; for example, an alternative may use unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive for small programs, yet there exist implementations capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads; it uses the call stack differently, allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed but are now unsupported. There are several compilers and transpilers to high-level object languages, taking as their source language unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers and older projects not designed for Python 3.x syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. Several strategies and tools exist for optimizing Python performance, despite the inherent slowness of an interpreted language.

Language Development

Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies.

Naming

Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer_cluster] | [TOKENS: 3475]
Computer cluster

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g., using Open Source Cluster Application Resources (OSCAR)), different operating systems or different hardware can be used on each computer. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters and the increased speed of network fabric have favoured their adoption. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but they also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.

Basic concepts

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g., personal computers used as servers) via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single-system-image concept.

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches, such as peer-to-peer or grid computing, which also use many nodes but with a far more distributed nature. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer. The developers used Linux, the Parallel Virtual Machine toolkit, and the Message Passing Interface library to achieve high performance at a relatively low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance.
The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters; e.g., the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture.

History

Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.

The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977 and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem NonStop (a 1976 high-availability commercial product) and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use).

Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use it within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g., the K computer) relied on cluster architectures.

Attributes of clusters

Computer clusters may be configured for different purposes, ranging from general-purpose business needs such as web-service support to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive, and a "computer cluster" may also use a high-availability approach, etc.

"Load-balancing" clusters are configurations in which cluster nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized. However, approaches to load-balancing may differ significantly among applications; e.g., a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster, which may just use a simple round-robin method, assigning each new request to a different node, as in the sketch below.
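A minimal sketch of that round-robin assignment, assuming a hypothetical list of node names (real load balancers also track health, load, and sessions):

```python
import itertools

nodes = ["node-a", "node-b", "node-c"]  # hypothetical cluster nodes
rotation = itertools.cycle(nodes)       # endless round-robin iterator

def assign(request_id: str) -> str:
    """Assign each new request to the next node in fixed rotation."""
    return next(rotation)

for req in ["r1", "r2", "r3", "r4"]:
    print(req, "->", assign(req))  # r1->node-a, r2->node-b, r3->node-c, r4->node-a
```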
Computation-oriented computer clusters are used for computation-intensive purposes, rather than for handling IO-oriented operations such as web service or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the services a cluster provides. They operate by having redundant nodes, which are used to provide service when system components fail (a minimal sketch of this heartbeat-and-failover idea appears at the end of this section). HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free-software HA package for the Linux operating system.

Benefits

Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enables scalability and, in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.

Clusters provide scalability through the ability to add nodes horizontally: more computers may be added to the cluster to improve its performance, redundancy, and fault tolerance. This can be an inexpensive solution compared to scaling up a single node in the cluster, and it allows larger computational loads to be executed by a larger number of lower-performing computers. Adding a new node does not reduce reliability, because the entire cluster does not need to be taken down; a single node can be taken down for maintenance while the rest of the cluster takes on its load. If a large number of computers are clustered together, this lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.

Design and configuration

One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes and needs little or no inter-node communication, approaching grid computing.

In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "master", which is a specific computer handling the scheduling and management of the slaves. In a typical implementation the master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general-purpose network of the organization. The slave computers typically have their own version of the same operating system, as well as local memory and disk space. However, the private slave network may also have a large shared file server that stores global persistent data, accessed by the slaves as needed.
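As promised under "High-availability clusters" above, here is a minimal sketch in C of the heartbeat logic behind failover (the node names, health probe, and three-miss threshold are all hypothetical; production packages such as Linux-HA add quorum, fencing, and real network probes):

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical health probe; a real implementation would send a
       heartbeat message over the network and wait for a reply. */
    static bool is_alive(const char *node)
    {
        (void)node;
        return false;  /* pretend the primary has stopped responding */
    }

    int main(void)
    {
        const char *primary = "node-a";
        const char *standby = "node-b";
        int missed = 0;

        /* Declare the primary dead after three missed heartbeats,
           then promote the redundant standby node. */
        for (int tick = 0; tick < 5; tick++) {
            if (is_alive(primary))
                missed = 0;
            else
                missed++;
            if (missed >= 3) {
                printf("failover: promoting %s\n", standby);
                break;
            }
        }
        return 0;
    }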
A special-purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general-purpose scientific computations.

Due to the increasing computing power of each generation of game consoles, a novel use has emerged in which consoles are repurposed into high-performance computing (HPC) clusters. Examples of game-console clusters include Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer gaming product used this way is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when double-precision values are used, GPUs become as precise to work with as CPUs while still being much less costly to purchase.

Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, cluster nodes may run on separate physical computers with different operating systems, overlaid with a virtual layer that makes the nodes appear similar. The cluster may also be virtualized in various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.

Data sharing and communication

As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the two approaches at that time was that the early supercomputers relied on shared memory. Clusters do not typically use physically shared memory, and many supercomputer architectures have also abandoned it. However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes, and the Oracle Cluster File System.

Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that present the node as part of a "parallel virtual machine". PVM provides a run-time environment for message passing, task and resource management, and fault notification, and can be used by user programs written in languages such as C, C++, or Fortran.

MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations, which typically use TCP/IP and socket connections. MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, and Python. Thus, unlike PVM, which provides a concrete implementation, MPI is a specification that has been implemented in systems such as MPICH and Open MPI.
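To make the message-passing model concrete, here is a minimal MPI program in C (a sketch only; it assumes an MPI implementation such as MPICH or Open MPI, compiled with mpicc and launched with something like mpirun -np 4 ./a.out):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

        if (rank == 0) {
            /* Rank 0 sends one integer to every other rank. */
            for (int dest = 1; dest < size; dest++) {
                int payload = dest * 100;
                MPI_Send(&payload, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
            }
        } else {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank %d of %d received %d\n", rank, size, payload);
        }

        MPI_Finalize();
        return 0;
    }

Each process runs the same executable; the rank returned by MPI_Comm_rank is what differentiates the coordinator from the workers.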
Cluster management

One of the challenges in the use of a computer cluster is the cost of administering it, which can at times be as high as the cost of administering N independent machines if the cluster has N nodes. In some cases this gives an advantage to shared-memory architectures with their lower administration costs. It has also made virtual machines popular, due to their ease of administration.

When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster, so mapping tasks onto CPU cores and GPU devices poses significant challenges. This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.

When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational. Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods: one disables the node itself, and the other disallows access to resources such as shared disks. The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node. The resource-fencing approach disallows access to resources without powering off the node. This may include persistent-reservation fencing via SCSI-3, fibre channel fencing to disable the fibre channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.

Software development and administration

Load-balancing clusters such as web-server clusters use cluster architectures to support a large number of users; typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel-processing capabilities of the cluster and partition "the same computation" among several nodes. Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to achieve a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.

Developing and debugging parallel programs on a cluster requires parallel language primitives as well as suitable tools, such as those discussed by the High Performance Debugging Forum (HPDF), which resulted in the HPD specifications. Tools such as TotalView were then developed to debug parallel implementations on computer clusters that use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing. The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.

Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation. This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results (a minimal sketch follows).
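A minimal sketch of application checkpointing in C (the file name and state layout are hypothetical; production systems checkpoint far richer state, often to a clustered file system so that a different node can resume the work):

    #include <stdio.h>

    struct state { long iteration; double result; };  /* hypothetical job state */

    /* Persist the state so a restarted node can resume instead of recomputing. */
    static int save_checkpoint(const struct state *s)
    {
        FILE *f = fopen("job.ckpt", "wb");
        if (!f) return -1;
        size_t ok = fwrite(s, sizeof *s, 1, f);
        fclose(f);
        return ok == 1 ? 0 : -1;
    }

    /* On startup, try to resume from the last checkpoint. */
    static int load_checkpoint(struct state *s)
    {
        FILE *f = fopen("job.ckpt", "rb");
        if (!f) return -1;                 /* no checkpoint: start fresh */
        size_t ok = fread(s, sizeof *s, 1, f);
        fclose(f);
        return ok == 1 ? 0 : -1;
    }

    int main(void)
    {
        struct state s = {0, 0.0};
        if (load_checkpoint(&s) == 0)
            printf("resuming at iteration %ld\n", s.iteration);

        for (; s.iteration < 1000000; s.iteration++) {
            s.result += 1e-6;                     /* stand-in for real work */
            if (s.iteration % 100000 == 0)
                save_checkpoint(&s);              /* periodic checkpoint */
        }
        printf("done: %f\n", s.result);
        return 0;
    }

The checkpoint interval trades off I/O overhead against the amount of work lost on failure.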
Implementations

The Linux world supports various cluster software. For application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed, and OpenSSI are full-blown clusters integrated into the kernel that provide automatic process migration among homogeneous nodes. OpenSSI, openMosix, and Kerrighed are single-system-image implementations.

Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing such as a job scheduler, the MS-MPI library, and management tools. gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project. Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).

Other approaches

Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.

See also: distributed computing; computer farms.
========================================
[SOURCE: https://techcrunch.com/2026/02/15/the-great-computer-science-exodus-and-where-students-are-going-instead/] | [TOKENS: 1121]
The great computer science exodus (and where students are going instead)

Something strange happened at University of California campuses this fall. For the first time since the dot-com crash, computer science enrollment dropped. System-wide, it fell 6% last year after declining 3% in 2024, according to reporting this past week by the San Francisco Chronicle. Even as overall college enrollment climbed 2% nationally — according to January data from the National Student Clearinghouse Research Center — students are bailing on traditional CS degrees. The one exception is UC San Diego — the only UC campus that added a dedicated AI major this fall.

This all might look like a temporary blip tied to news about fewer CS grads finding work out of college. But it's more likely an indicator of the future, one that China is much more enthusiastically embracing. As MIT Technology Review reported last July, Chinese universities have leaned hard into AI literacy, treating AI not as a threat but instead as essential infrastructure. Nearly 60% of Chinese students and faculty now use AI tools multiple times daily, and schools like Zhejiang University have made AI coursework mandatory, while top institutions like Tsinghua have created entirely new interdisciplinary AI colleges. In China, fluency with AI isn't optional anymore; it's table stakes.

U.S. universities are scrambling to catch up. Over the last two years, dozens have launched AI-specific programs. MIT's "AI and decision-making" major is now the second-largest major on campus, says the school. As reported by the New York Times in December, the University of South Florida enrolled more than 3,000 students in a new AI and cybersecurity college during its fall semester. The University at Buffalo last summer launched a new "AI and Society" department that offers seven new, specialized undergraduate degree programs, and it received more than 200 applicants before it swung open its doors.

The transition hasn't been smooth everywhere. When I spoke with UNC Chapel Hill Chancellor Lee Roberts in October, he described a spectrum — some faculty "leaning forward" with AI, others with "their heads in the sand." Roberts, a former finance executive who arrived from outside academia, was pushing hard for AI integration despite faculty resistance. A week earlier, UNC had announced it would merge two schools to create an AI-focused entity — a decision that drew faculty pushback. Roberts had also appointed a vice provost specifically for AI.

"No one's going to say to students after they graduate, 'Do the best job you can, but if you use AI, you'll be in trouble,'" Roberts told me. "Yet we have faculty members effectively saying that right now."

Parents are playing a role in this rocky transition, too.
David Reynaldo, who runs the admissions consultancy College Zoom, told the Chronicle that parents who once pushed kids toward CS are now reflexively steering them toward other majors that seem more resistant to AI automation, including mechanical and electrical engineering. But the enrollment numbers suggest students are voting with their feet. According to a survey in October by the nonprofit Computing Research Association — its members include computer science and computer engineering departments from a wide range of universities — 62% of respondents reported that their computing programs saw undergraduate enrollment decline this fall.

But with AI programs ballooning, it's looking less like a tech exodus and more like a migration. The University of Southern California is launching an AI degree this coming fall; so are Columbia University, Pace University, and New Mexico State University, among many others. Students aren't abandoning tech; they're choosing programs focused on AI instead to land work.

It's too soon to say whether this recalibration is permanent, a temporary panic, or a near-term solution to a longer-term challenge. But it's certainly a wake-up call for administrators who've spent years wrestling with how to handle AI in the classroom. The debate over whether to ban ChatGPT is ancient history at this point. The question now is whether American universities can move fast enough or whether they'll keep arguing about what to do while students transfer to schools that already have answers.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Compact_of_Free_Association] | [TOKENS: 3125]
Contents Compact of Free Association

The Compacts of Free Association (COFA) are international agreements establishing and governing the relationships of free association between the United States and the three Pacific Island sovereign states of the Federated States of Micronesia (FSM), the Republic of the Marshall Islands (RMI), and the Republic of Palau. As a result, these countries are sometimes known as the Freely Associated States (FASs). All three agreements next expire in 2043.

These countries, together with the Commonwealth of the Northern Mariana Islands, formerly constituted the Trust Territory of the Pacific Islands, a United Nations trusteeship administered by the United States Navy from 1947 to 1951, and by the U.S. Department of the Interior from 1951 to 1986 (to 1994 for Palau). The compacts came into being as an extension of the US–UN territorial trusteeship agreement, which obliged the federal government of the United States "to promote the development of the people of the Trust Territory toward self-government or independence as appropriate to the particular circumstances of the Trust Territory and its peoples and the freely expressed wishes of the peoples concerned." Under the compacts, the U.S. federal government provides guaranteed financial assistance over a 15-year period, administered through its Office of Insular Affairs, in exchange for full international defense authority and responsibilities.

The Compacts of Free Association were initiated by negotiators in 1980 and signed by the parties in 1982 and 1983. They were approved by the citizens of the Pacific states in plebiscites held in 1983. Legislation on the compacts was adopted by the U.S. Congress in 1986 and signed into law on November 13, 1986.

Associated states

Economic provisions

Each of the associated states actively participates in all Office of Insular Affairs technical assistance activities. The U.S. gives only these countries access to many U.S. domestic programs, including: disaster response, recovery, and hazard mitigation programs under the Federal Emergency Management Agency; some U.S. Department of Education programs, including the Pell Grant; and services provided by the National Weather Service, the United States Postal Service, the Federal Aviation Administration, the Federal Communications Commission, the Federal Deposit Insurance Corporation, and U.S. representation to the International Frequency Registration Board of the International Telecommunication Union. The Compact area, while outside the customs area of the United States, is mainly duty-free for imports. Most citizens of the associated states may live and work in the United States, and most U.S. citizens and their spouses may live and work in the associated states.

In 1996, the U.S. Personal Responsibility and Work Opportunity Act removed Medicaid benefits for those from the COFA states living in the US, even after the five-year waiting period that applies to most other resident aliens. However, in December 2020, Congress restored Medicaid for Compact of Free Association communities.

Military provisions

The COFA allows the United States to operate armed forces in Compact areas and to demand land for operating bases (subject to negotiation), and it excludes the militaries of other countries without U.S. permission. The U.S. in turn becomes responsible for protecting its affiliate countries and for administering all international defense treaties and affairs, though it may not declare war on their behalf.
The U.S. is not allowed to use nuclear, chemical, or biological weapons in Palauan territory. In the territories of the Marshall Islands and the Federated States of Micronesia, it is not allowed to store such weapons, except in times of national emergency or a state of war, or when necessary to defend against an actual or impending attack on the U.S., the Marshall Islands, or the Federated States of Micronesia. Citizens of the associated states may serve in America's armed forces, and there is a high level of military enlistment by Compact citizens. For example, in 2008 the Federated States of Micronesia had a higher per-capita enlistment rate than any U.S. state, and more than five times the national per-capita average of casualties in Iraq and Afghanistan: nine soldiers out of a population of 107,000.

21st-century renewal and updates

In 2003, the compacts with the RMI and FSM were renewed for 20 years. These new compacts provided US$3.5 billion in funding for both countries. US$30 million was also to be disbursed annually among American Samoa, Guam, Hawaii, and the Northern Mariana Islands in "Compact Impact" funding. This funding helped the governments of these localities cope with the expense of providing services to immigrants from the RMI, FSM, and Palau. The U.S. use of Kwajalein Atoll for missile testing was renewed for the same period.

The new compacts also changed certain immigration rules. RMI and FSM citizens traveling to the U.S. are now required to have passports. The US Postal Service was given the option to apply international postage rates for mail between the U.S. and the RMI/FSM, phased in over five years. The USPS began implementing the change in January 2006 but decided to resume domestic services and rates in November 2007. The renewed compact, commonly called "Compact II," took effect for the FSM on June 25, 2004, and for the RMI on June 30, 2004.

The economic provisions of the Compact for Palau, which provided $18 million in annual subsidies and grants, expired on September 30, 2009, and renewal talks were concluded in late 2010. U.S. financial support for Palau has since been based on a continuing resolution passed by the U.S. Congress. The Compact Trust Fund set up to replace U.S. financial aid underperformed because of the Great Recession. The military and civil defense provisions remained in effect until 2015. An amended Compact, enacted December 17, 2003, as Public Law 108-188, provided financial assistance to the Marshall Islands and Micronesia through 2023. The Compact of Free Association agreement with the Republic of Palau, enshrined in US Public Law 99-658, was followed by a Compact Review Agreement signed between the U.S. and Palau in 2018, extending certain financial provisions through September 30, 2024.

In March 2022, President Joe Biden named Ambassador Joseph Yun as US Special Presidential Envoy for Compact Negotiations to take over negotiation of the amendment and continuation of COFA. On October 16, 2023, agreements to renew all three compacts for a period of 20 years were formally signed by representatives of each Freely Associated State (FAS) and the U.S. State Department. Total funding for all three agreements is $7.1 billion paid over 20 years ($889 million to Palau; $3.3 billion to the FSM; $2.3 billion to the RMI; and $634 million for the U.S. Postal Service to offset continuing domestic-rate mail service). Palau Finance Minister Kaleb Udui Jr. and U.S. Ambassador Yun signed Palau's COFA extension on May 22, 2023, the island government having previously requested to move its date more in line with those of the other two countries. On May 23, 2023, FSM negotiator Leo Falcam and a State Department representative signed Micronesia's extension at the U.S. embassy in Pohnpei. Marshall Islands' Minister of Foreign Affairs and Trade Jack Ading, alongside Ambassador Yun, signed the RMI's agreement on October 16, 2023. Approval by each legislature, including a funding mechanism in Congress, was the final step to bring each agreement into force. Legislation implementing the new agreements was enacted by the U.S. Congress in March 2024.

Potential associated states

The former government of the United States unincorporated territory of Guam, led by Governor Eddie Calvo, campaigned for a plebiscite on Guam's future political status, with free association following the model of the Marshall Islands, Micronesia, and Palau as one of the possible options. In Puerto Rico, the soberanista movement advocates for the territory to be granted a freely associated status. The 2017 status referendum presented "Independence/Free Association" as an option; if the majority of voters had chosen it, a second round of voting would have been held to choose between free association and full independence. In 2022, the U.S. Congress introduced the Puerto Rico Status Act, which would hold a federally sponsored referendum on the territory's status, with a free-association status expected to be presented as an option.

Former U.S. diplomat Richard K. Pruett suggested in 2020 that other possible CFA states could include Kiribati, Nauru, and the Philippines. The number of CFA states has so far been limited because the status is reserved for only the closest allies of the U.S. Although the CFA is very expensive, support for the alliances has been popular in the U.S. and considered mutually beneficial, with the small island nations warning the U.S. of dangers in the Pacific region, such as global warming and the encroaching influence of foreign powers.

Greenland has also been listed as a potential CFA state following the 2024 U.S. presidential election. Rasmus Leander Nielsen of the University of Greenland said that Greenlanders have discussed, since the 1980s, creating a compact of free association with Denmark after independence, and that some have suggested a COFA with the United States instead. Barry Scott Zellen, a scholar of Arctic strategy at the United States Coast Guard Academy, suggested Greenland could become an organized and unincorporated territory of the United States but with a clear pathway to eventual admission as a constituent state "not unlike that which Alaska followed". According to Zellen, "Greenlandic Inuit, who suffer from a long legacy of neglect and whose colonial experience, despite recent gains in autonomy, has not been entirely positive, may indeed stand to benefit in many ways" from this arrangement. However, the majority of Greenlanders do not want to be part of the United States. In a survey conducted by Verian in Denmark for Berlingske and Sermitsiaq in January 2025, Greenlanders were asked: "Do you want Greenland to leave Denmark and become part of the United States?" The results show that 85% of Greenlanders do not want to leave the Realm and become part of the United States, while 6% want to leave and become part of the U.S., and 9% are undecided.
U.S. fulfillment of commitments

The United States' administration of the former trust territories now covered under the Compacts of Free Association has been subject to ongoing criticism over the past several decades. A 1961 United Nations mission report noted deficiencies in "American administration in almost every area: poor transportation, failure to settle war damage claims; failure to adequately compensate for land taken for military purposes; poor living conditions[;] inadequate economic development; inadequate education programs; and almost nonexistent medical care." In 1971, congresswoman Patsy Mink further noted that "[A]fter winning the right to control Micronesia, [the U.S.] proceeded to allow the islands to stagnate and decay through indifference and lack of assistance. . . . [T]he people are still largely impoverished and lacking in all of the basic amenities which we consider essential – adequate education, housing, good health standards, modern sanitation facilities."

After the compacts, criticism was also presented to the United States House Foreign Affairs Subcommittee on Asia and the Pacific regarding the unfulfilled commitments of the United States to address the impacts of U.S. nuclear testing in the Marshall Islands, which were included as part of the Pacific Proving Grounds. Speakers noted that while section 177 of the Compact of Free Association recognized the United States' responsibility "to address past, present and future consequences of the nuclear testing claims," less than $4 million was awarded out of a $2.2 billion judgment rendered by a Nuclear Claims Tribunal created under the RMI Compact, and the United States Court of Claims had dismissed two lawsuits to enforce the judgment. With respect to these unaddressed claims, medical practitioners also noted the potential widespread impacts of nuclear testing within the Pacific Proving Grounds, indicated by the prevalence of radiogenic diseases as well as of heart disease, diabetes, and obesity associated with "[a] forced change in dietary patterns and lifestyle" resulting from U.S. administration after the testing.

In 2011, lawmakers further noted that the U.S. Congress had continuously failed to cover the costs of promised medical care and services to displaced Compact citizens who migrate to the United States for health care, education, and employment opportunities, particularly since the passage of the Personal Responsibility and Work Opportunity Act. Questions regarding U.S. responsibility have also been raised over the numerous derelict warships and oil tankers abandoned or destroyed by the U.S. military in atolls and islands throughout the Compact area.

Healthcare issues

In 2009, the state of Hawaii, under the administration of Governor Linda Lingle, attempted to restrict health care access for Compact citizens by removing all Compact residents of Hawaii from Med-QUEST, the state's comprehensive Medicaid coverage plan. COFA residents were instead subject to Basic Health Hawaii, a limited health care plan under which "transportation services are excluded and patients can receive no more than ten days of medically necessary inpatient hospital care per year, twelve outpatient visits per year, and a maximum of four medication prescriptions per calendar month. . . . BHH covers dialysis treatments as an emergency medical service only, and the approximate ten to twelve prescription medications dialysis patients take per month are not fully covered. BHH . . . caus[es] cancer patients to exhaust their allotted doctors' visits within two to three months".

Noting that such a policy likely constituted unlawful discrimination in violation of the Equal Protection Clause, federal District Court Judge John Michael Seabright issued a preliminary injunction against the implementation of Basic Health Hawaii. In finding a high likelihood of irreparable harm, Judge Seabright took note of the "compelling evidence that BHH's limited coverage ... is causing COFA Residents to forego much needed treatment because they cannot otherwise afford it". Lingle's successor, Governor Neil Abercrombie, continued the state's appeal of the injunction to the United States Court of Appeals for the Ninth Circuit, which ruled in favor of the state. When the United States Supreme Court declined to hear the case, the Abercrombie administration removed most COFA residents from Med-QUEST and transferred them onto Affordable Care Act plans. In other states, notably Arkansas, which has a significant Marshallese population, COFA residents have not been eligible for Medicaid. In 2020, the United States Congress restored Medicaid eligibility for COFA residents with the Consolidated Appropriations Act. In 2024, COFA citizens' access to many other federal programs, such as the Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families, and Supplemental Security Income, was restored after having been dropped in the 1990s by a welfare reform bill.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Talk:Middle_East#Dubious] | [TOKENS: 1619]
Contents Talk:Middle East

This article is related to the Arab–Israeli conflict, which is subject to the extended-confirmed restriction. You are not an extended-confirmed user, so you must not edit or discuss this topic anywhere on Wikipedia except to make an edit request. (Additional details are in the message box just below this one.)

Warning: active arbitration remedies. The following restrictions apply to everyone editing this article: editors who repeatedly or seriously fail to adhere to the purpose of Wikipedia, any expected standards of behaviour, or any normal editorial process may be blocked, restricted, or sanctioned by an administrator. This page is subject to the extended confirmed restriction related to the Arab–Israeli conflict, and the article Middle East, along with other pages relating to the Syrian Civil War and ISIL, is designated by the community as a contentious topic. With respect to any reverting restrictions: if you are unsure whether your edit is appropriate, discuss it here on this talk page first. Remember: when in doubt, don't revert!

Extended-confirmed-protected edit request on 11 March 2025 (typo fix request)
In the 2nd paragraph of Middle East#Translations, please add the closing parenthesis after: (terms meaning Near East. 78.9.227.98 (talk) 17:53, 11 March 2025 (UTC)

Interactive table does not work properly when ranking by Nominal GDP
The title says it all. 218.250.192.180 (talk) 11:40, 13 May 2025 (UTC)

Extended-confirmed-protected edit request on 24 May 2025
The citation for the following sentence: "With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions of the Middle East. In recent years, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies.[citation needed]" is https://www3.weforum.org/docs/WEF_Travel_and_Tourism_Development_Index_2024.pdf Jetsetter84 (talk) 09:28, 24 May 2025 (UTC)

Proposal for an edit to Iran's entry in the "Countries and territory" section
While Islam is Iran's official religion (Shia Islam specifically), under the Iranian constitution Sunni Islam, Judaism, Christianity and Zoroastrianism are all officially recognised and protected minority religions. It might be more accurate to add a "Recognised minority religions" section to Iran's entry. Rustttic (talk) 01:52, 1 June 2025 (UTC)

New Map
I propose to add a new map that indicates all countries, regions, and territories that have been historically included in the Middle East since the beginning of its usage. I think it will be educational to do so, to show readers that the definition, countries, territories, etc. have never been stagnant. I can provide multiple links as reference if any moderators accept this proposal. Thepeacefulstaff (talk) 16:17, 8 July 2025 (UTC)

Extended-confirmed-protected edit request on 11 July 2025
The emblem of Qatar has been updated since 2022. The old one can still be seen on the page. Checkit0172 (talk) 10:25, 11 July 2025 (UTC)

Ethnic groups
Azeris, Nubians, Iraqi Turkmen, Yazidis and Greek Cypriots are not in the cited source (). Wikiuser552 (talk) 00:37, 17 July 2025 (UTC)

Extended-confirmed-protected edit request on 8 August 2025
Israel is part of the Middle East. https://www.britannica.com/place/Middle-East 107.135.135.209 (talk) 17:05, 8 August 2025 (UTC)

Typo
"Most Middle Eastern countries (13 out of 18) are part of the Arab world." There are 17 Middle Eastern countries, not 18. Change "(13 out of 18)" to "(13 out of 17)". Thepeacefulstaff (talk) 16:58, 13 August 2025 (UTC)

Suggestion
It would be beneficial to add, to some degree, in the lede (which most viewers only read): "The following presentation uses maps to illustrate the lack of consensus among governments, international organizations, and scholars regarding how to define the Middle East or even whether to use that term. The instability of the concept 'Middle East' points to the need to break down traditional area studies barriers." Images: https://mideast.unc.edu/where/ "There are several common conceptions of which countries the term Middle East encompasses. Virtually every use of the term includes: the Arabian Peninsula (Saudi Arabia, Kuwait, Yemen, Oman, Bahrain, Qatar, and the United Arab Emirates); the Levant (Syria, Lebanon, Israel, Jordan, the West Bank, and the Gaza Strip) and Iraq. Iran, Turkey (Türkiye), and Egypt are typically, but not always, included. Cyprus, which has a strong historical connection with the coastal areas of the eastern Mediterranean Sea, is sometimes considered part of the Middle East." https://www.britannica.com/place/Middle-East Perhaps Cyprus, Egypt, Iran, and Turkey can be shaded a lighter green on the map. Thepeacefulstaff (talk) 17:22, 13 August 2025 (UTC)

Edit request 27 October 2025
Description of suggested change: Please add this. Diff: 2405:6E00:623:F942:6008:2983:CC46:595E (talk) 03:46, 27 October 2025 (UTC)
Please add this because Politics of the Middle East goes to this page. Diff: 2405:6E00:623:877F:9341:F5FA:4867:115E (talk) 11:14, 1 November 2025 (UTC)

The Nominal GDP list can't sort correctly
The Nominal GDP list uses "," instead of "." for Saudi Arabia and Turkey. This means that if you sort by nominal GDP, Saudi Arabia and Turkey are at the bottom while they should be at the top. Looking at the page GDP (nominal), https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal), all countries use a ",". I suggest replacing the "." in the GDP list with a "," to keep it in line with other pages and fix the Nominal GDP list. Aqmery (talk) 15:26, 4 November 2025 (UTC)

External links
Will the following link be accepted: Gulf/2000 Project? תיל"ם (talk) 06:34, 4 December 2025 (UTC)

Bias
Caucasians and Cypriots aren't even mentioned at the same level as the Arab countries. Biased. Mynameiscandle (talk) 06:01, 14 February 2026 (UTC)
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_culture] | [TOKENS: 3396]
Contents Sociology of culture

The sociology of culture, and the related field of cultural sociology, concerns the systematic analysis of culture, usually understood as the ensemble of symbolic codes used by members of a society, as they are expressed within the context of that society. According to Georg Simmel, culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history". In the sociological field, culture is defined as the ways in which individuals think, communicate, and behave, as well as the tangible artifacts that collectively influence a community's way of life.

Contemporary sociologists' approach to culture is often divided between a "sociology of culture" and "cultural sociology". The terms are similar, though not interchangeable. The sociology of culture is an older concept, and it considers some topics and objects as more or less "cultural" than others. In contrast, Jeffrey C. Alexander introduced the term cultural sociology, an approach that sees all, or most, social phenomena as inherently cultural at some level. For instance, as a leading proponent of the "strong program" in cultural sociology, Alexander argues: "To believe in the possibility of cultural sociology is to subscribe to the idea that every action, no matter how instrumental, reflexive, or coerced [compared to] its external environment, is embedded to some extent in a horizon of affect and meaning." In terms of analysis, the sociology of culture often attempts to explain discretely cultural phenomena as a product of social processes, while cultural sociology sees culture as a component of explanations of social phenomena. As opposed to the field of cultural studies, cultural sociology does not reduce all human matters to a problem of cultural encoding and decoding. For instance, Pierre Bourdieu's cultural sociology has a "clear recognition of the social and the economic as categories which are interlinked with, but not reducible to, the cultural."

Development

Cultural sociology first emerged in Weimar Germany, where sociologists such as Alfred Weber used the term Kultursoziologie (cultural sociology). Cultural sociology was then "reinvented" in the English-speaking world as a product of the "cultural turn" of the 1960s, which ushered in structuralist and postmodern approaches to social science. This type of cultural sociology may loosely be regarded as an approach incorporating cultural analysis and critical theory. In the beginning of the cultural turn, sociologists tended to use qualitative methods and hermeneutic approaches to research, focusing on meanings, words, artifacts, and symbols. "Culture" has since become an important concept across many branches of sociology, including historically quantitative and model-based subfields such as social stratification and social network analysis.

The sociology of culture grew from the intersection between sociology, as shaped by early theorists like Marx, Durkheim, and Weber, and anthropology, where researchers pioneered ethnographic strategies for describing and analyzing a variety of cultures around the world.
Part of the legacy of the early development of the field is still felt in its methods (much of cultural-sociological research is qualitative), in its theories (a variety of critical approaches to sociology are central to current research communities), and in the substantive focus of the field. For instance, relationships between popular culture, political control, and social class were early and lasting concerns in the field.

As a major contributor to conflict theory, Marx argued that culture served to justify inequality. The ruling class, or bourgeoisie, produces a culture that promotes its interests while repressing the interests of the proletariat. His most famous line to this effect is that "Religion is the opium of the people". Marx believed that the "engine of history" was the struggle between groups of people with diverging economic interests, and thus that the economy determined the cultural superstructure of values and ideologies. For this reason, Marx is considered a materialist: he believed that the economic (the material) produces the cultural (the ideal), which "stands Hegel on his head", as Hegel had argued that the ideal produced the material.

Durkheim held the belief that culture has many relationships to society.

Weber innovated the idea of a status group as a certain type of subculture. Status groups are based on things such as race, ethnicity, religion, region, occupation, gender, and sexual preference. These groups live a certain lifestyle based on different values and norms; they are a culture within a culture, hence the label subculture. Weber also advanced the idea that people are motivated by their material and ideal interests, which include things such as preventing oneself from going to hell. Weber further explains that people use symbols to express their spirituality, that symbols are used to express the spiritual side of real events, and that ideal interests are derived from symbols.

For Simmel, culture refers to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history." Simmel presented his analyses within a context of "form" and "content".

The elements of a culture

Although no two cultures are exactly alike, they all share common characteristics. A culture contains:

1. Social organization: the culture is structured by organizing its members into smaller groups to meet its specific requirements, with social classes ranked in order of importance (status) based on the culture's core values, for example money, job, education, or family.
2. Customs and traditions: rules of behavior enforced by the culture's ideas of right and wrong, such as customs, traditions, rules, or written laws.
3. Symbols: anything that carries a particular meaning recognized by people who share the same culture.
4. Norms: rules and expectations by which a society guides the behavior of its members. The two types of norms are mores and folkways. Mores are norms that are widely observed and have great moral significance; folkways are norms for routine, casual interaction.
5. Religion: the answers to a culture's basic questions about the meaning of life and its values.
6. Language: a system of symbols that allows people to communicate with one another.
7. Arts and literature: products of human imagination expressed through art, music, literature, stories, and dance.
8. Forms of government: how the culture distributes power; who keeps order within the society, who protects it from danger, and who provides for its needs. These can fall into categories such as democracy, republic, or dictatorship.
9. Economic systems: what to produce, how to produce it, and for whom; how people use their limited resources to satisfy their wants and needs. These can fall into categories such as traditional, market, command, or mixed economies.
10. Artifacts: distinct material objects, such as architecture, technologies, and artistic creations.
11. Social institutions: patterns of organization and relationships regarding governance, production, socializing, education, knowledge creation, the arts, and relating to other cultures.

Anthropology

In an anthropological sense, culture is society based on values and ideas, without the influence of the material world. As Talcott Parsons put it, "The cultural system is the cognitive and symbolic matrix for the central values system."

Culture is like the shell of a lobster; human nature is the organism living inside that shell. The shell, culture, identifies the organism, or human nature. Culture is what sets human nature apart and helps direct its life.

Anthropologists lay claim to the establishment of modern uses of the culture concept, as defined by Edward Burnett Tylor in the mid-19th century. Malinowski collected data from the Trobriand Islands: descent groups across the islands claim parts of the land, and to back up those claims they tell myths of how an ancestress started a clan and how the clan descends from that ancestress. Malinowski's observations were consistent with the research of Durkheim. Radcliffe-Brown immersed himself in the culture of the Andaman Islanders. His research showed that group solidarity among the islanders is based on music and kinship, and on the rituals that involve the use of those activities; in the words of Radcliffe-Brown, "Ritual fortifies Society". Marcel Mauss made many comparative studies of the religion, magic, law, and morality of occidental and non-occidental societies, developed the concept of the total social fact, and argued that reciprocity is the universal logic of cultural interaction.

Lévi-Strauss, drawing at once on the sociological and anthropological positivism of Durkheim, Mauss, Malinowski, and Radcliffe-Brown, on economic and sociological Marxism, on Freudian and Gestalt psychology, and on the structural linguistics of Saussure and Jakobson, carried out major studies in the areas of myth, kinship, religion, ritual, symbolism, magic, ideology (la pensée sauvage), knowledge, art, and aesthetics, applying methodological structuralism to his investigations. He sought the universal principles of human thought as a way of explaining social behaviors and structures.

Mary Douglas is widely known for her contributions to social anthropology, particularly her analysis of ritual purity, pollution, and the symbolic structures that shape cultural classifications of dirt and disorder. She examined how societies construct and maintain order through these classifications. Her research also focused on the concepts of risk, blame, and misfortune in both traditional and modern contexts, arguing that notions such as witchcraft, sin, and risk serve similar functions in assigning responsibility and preserving social order. Additionally, she developed a theoretical model for understanding how social structures and individual roles influence cultural beliefs and practices.
Major areas of research

French sociologist Pierre Bourdieu's influential model of society and social relations has its roots in Marxist theories of class and conflict. Bourdieu characterizes social relations in the context of what he calls the field, defined as a competitive system of social relations functioning according to its own specific logic or rules. The field is the site of struggle for power between the dominant and subordinate classes. It is within the field that legitimacy, a key aspect defining the dominant class, is conferred or withdrawn.

Bourdieu's theory of practice is practical rather than discursive, embodied as well as cognitive, and durable though adaptive. A central concern that sets the agenda in Bourdieu's theory of practice is how action follows regular statistical patterns without being the product of adherence to rules, norms, and/or conscious intention. To address this concern, Bourdieu introduces the concepts of habitus and field. Habitus describes the mutually penetrating realities of individual subjectivity and societal objectivity as they function in social construction; it is employed to transcend the dichotomy of the subjective and the objective.

The belief that culture is symbolically coded and can thus be taught from one person to another means that cultures, although bounded, can change. Cultures are both predisposed to change and resistant to it. Resistance can come from habit, religion, and the integration and interdependence of cultural traits. Cultural change can have many causes, including the environment, inventions, and contact with other cultures.

Several understandings of how cultures change come from anthropology. For instance, in diffusion theory, the form of something moves from one culture to another, but not its meaning. For example, the ankh symbol originated in Egyptian culture but has diffused to numerous cultures. Its original meaning may have been lost, but it is now used by many practitioners of New Age religion as an arcane symbol of power or life forces. A variant of diffusion theory, stimulus diffusion, refers to an element of one culture leading to an invention in another. Contact between cultures can also result in acculturation. Acculturation has different meanings, but in this context it refers to the replacement of the traits of one culture with those of another, as happened to many Native American groups. Related processes on an individual level are assimilation and transculturation, both of which refer to the adoption of a different culture by an individual.

Wendy Griswold outlined another sociological approach to cultural change. Griswold points out that it may seem as though culture comes from individuals (which, for certain elements of cultural change, is true), but there is also the larger, collective, and long-lasting culture that cannot have been the creation of single individuals, as it predates and post-dates individual humans and contributors to culture. The author presents a sociological perspective to address this conflict:

Sociology suggests an alternative to both the unsatisfying "it has always been that way" view at one extreme and the individual-genius view at the other. This alternative posits that culture and cultural works are collective, not individual, creations. We can best understand specific cultural objects... by seeing them not as unique to their creators but as the fruits of collective production, fundamentally social in their genesis. (p. 53)

In short, Griswold argues that culture changes through the contextually dependent and socially situated actions of individuals; macro-level culture influences the individual who, in turn, can influence that same culture. The logic is a bit circular, but it illustrates how culture can change over time yet remain somewhat constant.

It is, of course, important to recognize here that Griswold is talking about cultural change and not the actual origins of culture (as in, "there was no culture and then, suddenly, there was"). Because Griswold does not explicitly distinguish between the origins of cultural change and the origins of culture, it may appear as though Griswold is arguing here for the origins of culture and situating these origins in society. This is neither accurate nor a clear representation of sociological thought on this issue. Culture, just like society, has existed since the beginning of humanity (humans being social and cultural). Society and culture co-exist because humans have social relations and meanings tied to those relations (e.g. brother, lover, friend). Culture as a super-phenomenon has no real beginning except in the sense that humans (Homo sapiens) have a beginning. This makes the question of the origins of culture moot: it has existed as long as we have, and will likely exist as long as we do. Cultural change, on the other hand, is a matter that can be questioned and researched, as Griswold does.

Culture theory, developed in the 1980s and 1990s, sees audiences as playing an active rather than passive role in relation to mass media. One strand of research focuses on the audiences and how they interact with media; the other strand focuses on those who produce the media, particularly the news.

Frankfurt School

Current research

Computer-mediated communication (CMC) is the process of sending messages (primarily, but not limited to, text messages) through the direct use by participants of computers and communication networks. Restricting the definition to the direct use of computers in the communication process excludes communication technologies that rely upon computers for switching technology (such as telephony or compressed video) but do not require the users to interact directly with the computer system via a keyboard or similar interface. To be mediated by computers in the sense of this project, the communication must be done by participants fully aware of their interaction with the computer technology in the process of creating and delivering messages. Given the current state of computer communications and networks, this limits CMC primarily to text-based messaging, while leaving open the possibility of incorporating sound, graphics, and video images as the technology becomes more sophisticated.

Cultural activities are institutionalised; the focus on institutional settings leads to the investigation "of activities in the cultural sector, conceived as historically evolved societal forms of organising the conception, production, distribution, propagation, interpretation, reception, conservation and maintenance of specific cultural goods". Cultural institutions studies is therefore a specific approach within the sociology of culture.
Key figures Key figures in today's cultural sociology include: Julia Adams, Jeffrey Alexander, John Carroll, Diane Crane, Paul DiMaggio, Henning Eichberg, Ron Eyerman, Sarah Gatson, Andreas Glaeser, Wendy Griswold, Eva Illouz, Karin Knorr-Cetina, Michele Lamont, Annette Lareau, Stjepan Mestrovic, Philip Smith, Margaret Somers, Yasemin Soysal, Dan Sperber, Lynette Spillman, Ann Swidler, Diane Vaughan, and Viviana Zelizer.
========================================
[SOURCE: https://en.wikipedia.org/wiki/C_(programming_language)] | [TOKENS: 8771]
C (programming language) C is a general-purpose programming language created in the 1970s by Dennis Ritchie. By design, C gives the programmer relatively direct access to the features of the typical CPU architecture, customized for the target instruction set. It has been and continues to be used to implement operating systems (especially kernels), device drivers, and protocol stacks, but its use in application software has been decreasing. C is used on computers that range from the largest supercomputers to the smallest microcontrollers and embedded systems. A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code. Although neither C nor its standard library provides some popular features found in other languages, it is flexible enough to support them. For example, object orientation and garbage collection are provided by the external libraries GLib Object System and Boehm garbage collector, respectively. Since 2000, C has typically ranked as the most or second-most popular language in the TIOBE index. Characteristics The C language exhibits the following characteristics: "Hello, world" example The "Hello, World!" program example that appeared in the first edition of K&R has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output. The original version, together with a more modern version, is reproduced after this paragraph. The first line of the modern version is a preprocessor directive, indicated by #include, which causes the preprocessor to replace that line of code with the text of the stdio.h header file, which contains declarations for input and output functions including printf. The angle brackets around stdio.h indicate that the header file can be located using a search strategy that selects header files provided with the compiler over files with the same name that may be found in project-specific directories. The next code line declares the entry point function main. The run-time environment calls this function to begin program execution. The type specifier int indicates that the function returns an integer value.
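Both listings were missing from this copy; the first is the program as printed in the first edition of K&R, the second the conventional modern form assumed by the walkthrough in this section. The original version was:

    main()
    {
        printf("hello, world\n");
    }

A more modern version is:

    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");
    }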
The void parameter list indicates that the function takes no arguments. The run-time environment actually passes two arguments (typed int and char *[]), but this implementation ignores them. The ISO C standard (section 5.1.2.2.1) requires main to be declared either with no parameters (void) or with these two parameters – a special treatment not afforded to other functions. The opening curly brace indicates the beginning of the code that defines the function. The next line of code calls (diverts execution to) the C standard library function printf with the address of the first character of a null-terminated string specified as a string literal. The text \n is an escape sequence that denotes the newline character, which, when output in a terminal, moves the cursor to the beginning of the next line. Although printf returns an int value, it is silently discarded here. The semicolon ; terminates the call statement. The closing curly brace indicates the end of the main function. Prior to C99, an explicit return 0; statement was required at the end of the main function, but since C99, the main function (as the initial function call) implicitly returns 0 upon reaching its final closing curly brace. History The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language. Thompson wanted a programming language for developing utilities for the new platform. He first tried writing a Fortran compiler, but he soon gave up the idea and instead created a cut-down version of the recently developed systems programming language called BCPL. The official description of BCPL was not available at the time, and Thompson modified the syntax to be less 'wordy' and similar to a simplified ALGOL known as SMALGOL. He called the result B, describing it as "BCPL semantics with a lot of SMALGOL syntax". Like BCPL, B had a bootstrapping compiler to facilitate porting to new machines. Ultimately, few utilities were written in B because it was too slow and could not take advantage of PDP-11 features such as byte addressability. Unlike BCPL's //, which marks a comment running to the end of the line, B adopted /* comment */ as its comment delimiters, more akin to PL/I, allowing comments to appear in the middle of lines. (BCPL's comment style would be reintroduced in C++.) In 1971 Ritchie started to improve B, to use the features of the more-powerful PDP-11. A significant addition was a character data type. He called this New B (NB). Thompson started to use NB to write the Unix kernel, and his requirements shaped the direction of the language development. Through to 1972, richer types were added to the NB language. NB had arrays of int and char, and to these were added pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions. Arrays within expressions were effectively treated as pointers. A new compiler was written, and the language was renamed C. The C compiler and some utilities made with it were included in Version 2 Unix, which is also known as Research Unix. With Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C. By this time, the C language had acquired some powerful features such as struct types.
The preprocessor was introduced around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version provided only file inclusion and simple string replacement: #include and #define of parameterless macros. Soon after that, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation. Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and the Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In and around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms. In 1978 Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language. Known as K&R from the initials of its authors, the book served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". As this was released in 1978, it is now also referred to as C78. The second edition of the book covers the later ANSI C standard, described below. K&R introduced several language features, including a standard I/O library, the long int and unsigned int data types, and compound assignment operators spelled op= (such as -=) in place of the earlier =op forms. Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well. Although later versions of C require functions to have an explicit type declaration, K&R C only requires functions that return a type other than int to be declared before use. Functions used without prior declaration were presumed to return int. For example (see the sketch following this paragraph): the declaration of long_function() (on line 1) is required since it returns long, not int. Function int_function can be called (line 11) even though it is not declared, since it returns int. Also, variable intvar does not need to be declared as type int, since int is the default type for variables declared with the register keyword. Since function declarations did not include information about arguments, type checks were not performed, although some compilers would issue a warning if different calls to a function used different numbers or types of arguments. Tools such as Unix's lint utility were developed that (among other things) checked for consistency of function use across multiple source files. In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC) and other vendors. These included void functions, functions returning struct or union types, assignment of struct data types, and enumerated types. The popularity of the language, the lack of agreement on standard library interfaces, and the lack of compliance with the K&R specification led to standardization efforts. During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity increased significantly. In 1983 the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C.
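The example listing itself was dropped from this copy; the sketch below is a plausible reconstruction in pre-ANSI (K&R) style, using the names from the prose and laid out so that the declaration falls on line 1 and the undeclared call on line 11. The function name test_program is invented for the sketch, long_function and int_function are assumed to be defined in another source file, and a modern compiler would reject the implicit int declarations.

    long long_function();             /* line 1: required, since the return type is not int */

    test_program()
    {
        long longvar;
        register intvar;              /* no type given: int is the default for register */

        longvar = long_function();
        if (longvar > 0)
            intvar = 0;
        else intvar = int_function(); /* line 11: callable without declaration; int presumed */
        return intvar;
    }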
X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89. In 1990 the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language. ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication. One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code. C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness. In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C. After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets. The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda. C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers. C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. 
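The __STDC__ split mentioned above can be illustrated with a hypothetical header excerpt; the function name average is invented for the example.

    /* Standard-conforming compilers define __STDC__, so they see the
       prototype; old K&R compilers see an empty parameter list instead. */
    #ifdef __STDC__
    extern double average(int count, double *values);
    #else
    extern double average();
    #endif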
A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11. In addition, the C99 standard requires support for identifiers using Unicode in the form of escaped characters (e.g. \u0040 or \U0001f431) and suggests support for raw Unicode names. Work began in 2007 on another revision of the C standard, informally called "C1X" until its official publication as ISO/IEC 9899:2011 on December 8, 2011. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations. The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available. C17 is an informal name for ISO/IEC 9899:2018, a standard for the C programming language published in June 2018. It introduces no new language features, only technical corrections and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L to indicate that C17 support is available. C23 is an informal name for the current major C language standard revision. It was known as "C2X" through most of its development. It builds on past releases, introducing features like new keywords, additional meaning for auto to provide type inference when declaring variables, new types including nullptr_t and _BitInt(N), and expansions to the standard library. C23 was published in October 2024 as ISO/IEC 9899:2024. The standard macro __STDC_VERSION__ is defined as 202311L to indicate that C23 support is available. C2Y is an informal name for the next major C language standard revision, after C23 (C2X), that is hoped to be released later in the 2020s, hence the '2' in "C2Y". An early working draft of C2Y was released in February 2024 as N3220 by the working group ISO/IEC JTC1/SC22/WG14. Historically, embedded C programming has required non-standard extensions to the C language to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations. In 2008, the C Standards Committee published a technical report extending the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing. Definition C has a formal grammar specified by the C standard. Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals. C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements.
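The __STDC_VERSION__ values quoted in the history above lend themselves to compile-time dispatch on the language revision; a minimal sketch (the C95 value, 199409L, was set by Normative Amendment 1):

    #include <stdio.h>

    int main(void)
    {
    #if !defined(__STDC_VERSION__)
        puts("C89/C90: __STDC_VERSION__ is not defined");
    #elif __STDC_VERSION__ >= 202311L
        puts("C23 or later");
    #elif __STDC_VERSION__ >= 201710L
        puts("C17");
    #elif __STDC_VERSION__ >= 201112L
        puts("C11");
    #elif __STDC_VERSION__ >= 199901L
        puts("C99");
    #else
        puts("C95");
    #endif
        return 0;
    }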
Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures. As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if ... [else] conditional execution and by do ... while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used within the loop: break leaves the innermost enclosing loop statement, and continue skips to its reinitialization. There is also a non-structured goto statement, which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression. Unlike in many other languages, control flow will fall through to the next case unless terminated by a break. Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages. Kernighan and Ritchie say in the Introduction of The C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better." The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software. The basic C source character set includes the following characters: the 52 upper- and lower-case Latin letters, the 10 decimal digits, 29 graphic punctuation characters, the space character, and control characters representing horizontal tab, vertical tab, form feed, and end of line. The newline character indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as such. The POSIX standard mandates a portable character set which adds a few characters (notably "@") to the basic C source character set. Neither standard prescribes any particular value encoding—ASCII and EBCDIC both comply, since each includes at least those basic characters, even though they use different encoded values for them. Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable.
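As a short illustration of the fall-through behavior described above: with n equal to 2 in the sketch below, both "two" and "three" are printed, because only case 3 ends with a break.

    #include <stdio.h>

    int main(void)
    {
        int n = 2;
        switch (n) {
        case 1:
            puts("one");          /* no break: would fall through to case 2 */
        case 2:
            puts("two");          /* selected case; falls through to case 3 */
        case 3:
            puts("three");
            break;                /* break: control leaves the switch here */
        case 4:
            puts("four");
        }
        return 0;
    }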
Since C99 multi-national Unicode characters can be embedded portably within C source text by using \uXXXX or \UXXXXXXXX encoding (where X denotes a hexadecimal character). The basic C execution character set contains the same characters, along with representations for the null character, alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard. All versions of C have reserved words that are case sensitive. As reserved words, they cannot be used for variable names. C89 has 32 reserved words: auto, break, case, char, const, continue, default, do, double, else, enum, extern, float, for, goto, if, int, long, register, return, short, signed, sizeof, static, struct, switch, typedef, union, unsigned, void, volatile, and while. C99 added five more reserved words: _Bool ‡, _Complex, _Imaginary, inline, and restrict (‡ indicates an alternative spelling alias for a C23 keyword). C11 added seven more reserved words: _Alignas ‡, _Alignof ‡, _Atomic, _Generic, _Noreturn, _Static_assert ‡, and _Thread_local ‡ (‡ indicates an alternative spelling alias for a C23 keyword). C23 reserved fifteen more words: alignas, alignof, bool, constexpr, false, nullptr, static_assert, thread_local, true, typeof, typeof_unqual, _BitInt, _Decimal32, _Decimal64, and _Decimal128. Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed. Prior to C89, entry was reserved as a keyword. In the second edition of their book The C Programming Language, which describes what became known as C89, Kernighan and Ritchie wrote, "The ... [keyword] entry, formerly reserved but never used, is no longer reserved." and "The stillborn entry keyword is withdrawn." C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for arithmetic, bitwise operations, assignment (including compound assignment), comparison, logical operations, pointer reference and dereference, member selection, type conversion, and conditional evaluation. C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between the operators for assignment and equality may result in the accidental use of one in place of the other, and in many cases the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true unless the value of a is 0 after the assignment. The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent. The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal. There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). Integer type char is often used for single-byte characters. C99 added a Boolean data type. There are also derived types including arrays, pointers, records (struct), and unions (union). C is often used in low-level systems programming where escapes from the type system may be necessary.
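A short sketch of the two pitfalls just described, the accidental = in place of == and the precedence of == over &; the variable values are illustrative.

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 1;

        /* Intended (a == b + 1); the assignment stores 2 in a and the
           condition tests that 2, so the branch is taken. */
        if (a = b + 1)
            puts("taken: a is now 2");

        int x = 2;
        /* Parses as x & (1 == 0), i.e. x & 0, which is always false. */
        if (x & 1 == 0)
            puts("never printed, even though x is even");
        /* Parenthesized as intended. */
        if ((x & 1) == 0)
            puts("x is even");

        return 0;
    }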
The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way. Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".) C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative. C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type. Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers; the result of a malloc call is usually cast to the data type of the data to be stored. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays of struct objects. Pointers to functions (function pointers) are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch), in dispatch tables, or as callbacks to event handlers. A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0 (with or without explicit casting to a pointer type), as the NULL macro defined by several standard headers, or, since C23, with the constant nullptr. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true. Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type. Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects.
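A minimal sketch of the behaviors described above: dereferencing, scaled pointer arithmetic, malloc's null-pointer failure report, and the signed/unsigned comparison surprise. All values are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int value = 42;
        int *p = &value;             /* p records the address of value          */
        *p = 7;                      /* dereference: writes through the pointer */
        printf("%d\n", value);       /* prints 7                                */

        int arr[3] = {10, 20, 30};
        int *q = arr;                /* array converts to pointer to first element */
        printf("%d\n", *(q + 2));    /* prints 30: q + 2 advances two ints, not two bytes */

        int *buf = malloc(3 * sizeof *buf);  /* void * converts implicitly      */
        if (buf == NULL)             /* a null pointer reports allocation failure */
            return EXIT_FAILURE;
        buf[0] = *q;
        free(buf);

        /* Usual arithmetic conversions: -1 becomes a huge unsigned value,
           so the comparison below is false, not true. */
        unsigned int u = 1;
        int s = -1;
        if (s < u)
            puts("never printed");
        return 0;
    }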
Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types. Array types in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run time, using the standard library's malloc function, and treat it as an array. Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option. Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions. C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue. An example in modern C (C99 or later) of allocating a two-dimensional array on the heap and using multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers) appears after this paragraph, together with a similar implementation using C99's automatic variable-length array (VLA) feature. The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x+i). Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the (i+1)th element of the array. Furthermore, in most expression contexts (a notable exception is as operand of sizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
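The two listings referenced above were missing from this copy; the sketch below reconstructs the idea under C99 assumptions (the dimensions and element values are illustrative). The first part allocates the rows-by-cols array on the heap through a pointer to a variable-length array type; the second uses an automatic VLA.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int rows = 4, cols = 3;

        /* Heap allocation: one contiguous block, indexed as array[i][j]. */
        int (*array)[cols] = malloc(rows * sizeof *array);
        if (array == NULL)
            return EXIT_FAILURE;
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                array[i][j] = i * cols + j;
        printf("%d\n", array[3][2]);  /* prints 11 */
        free(array);

        /* Automatic VLA: same indexing, storage on the stack. */
        int vla[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                vla[i][j] = i * cols + j;
        printf("%d\n", vla[3][2]);    /* prints 11 */
        return 0;
    }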
The total size of an array x can be determined by applying sizeof to an expression of array type. The size of an element can be determined by applying the operator sizeof to any dereferenced element of an array A, as in n = sizeof A[0]. Thus, the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. Note that if only a pointer to the first element is available, as is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length is lost. One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three principal ways to allocate memory for objects: static, automatic, and dynamic allocation. These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run time. Most C programs make extensive use of all three. Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at run time, and since static allocations (and automatic allocations before C99) must have a fixed size at compile time, there are many situations in which dynamic allocation is necessary. Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on C dynamic memory allocation for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.) Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur. Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak.
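A minimal sketch contrasting the three allocation strategies just described; the names and values are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    static int counter;                  /* static: zero-initialized, lives for the whole run */

    int main(void)
    {
        int local[4] = {1, 2, 3, 4};     /* automatic: on the stack, released on return */

        /* Element count of a true array: total size over element size. */
        size_t n = sizeof local / sizeof local[0];

        int *dynamic = malloc(n * sizeof *dynamic);  /* dynamic: on the heap */
        if (dynamic == NULL)             /* malloc reports failure with a null pointer */
            return EXIT_FAILURE;

        for (size_t i = 0; i < n; i++)
            dynamic[i] = local[i] + counter;
        printf("%zu elements, last = %d\n", n, dynamic[n - 1]);

        free(dynamic);                   /* omitting this would leak the block */
        return 0;
    }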
Conversely, it is possible for memory to be freed but referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages with automatic garbage collection. The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. For a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "link the math library"). The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities. Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification. Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python. File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O which works through streams. From this perspective, a stream is a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid-state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system, such as most embedded programming). With few exceptions, implementations include low-level I/O. Language tools A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. Automated source code checking and auditing tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler.
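Returning to the stream model described earlier in this section, a minimal sketch that associates a stream with a file for writing and then reading; the file name example.txt is illustrative and a writable current directory is assumed.

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("example.txt", "w");  /* associate a stream with a file */
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "line one\n");             /* buffered write through the stream */
        fclose(f);                            /* flushes the buffer to the device  */

        char buf[64];
        f = fopen("example.txt", "r");
        if (f != NULL) {
            if (fgets(buf, sizeof buf, f) != NULL)
                fputs(buf, stdout);           /* prints: line one */
            fclose(f);
        }
        return 0;
    }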
Also, many compilers can optionally warn about syntactically valid constructs that are likely to be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems. There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection. Memory management checking tools like Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover run-time errors in memory usage. Uses C has been widely used to implement end-user and system-level applications. C is widely used for systems programming in implementing operating systems and embedded system applications. This is for several reasons: Computer games are often built from a combination of languages. C has featured significantly, especially for those games attempting to obtain the best performance from computer platforms. Examples include Doom from 1993. Historically, C was sometimes used for web development using the Common Gateway Interface (CGI) as a "gateway" for information between the web application, the server, and the browser. C may have been chosen over interpreted languages because of its speed, stability, and near-universal availability. It is no longer common practice for web development to be done in C, and many other web development languages are popular. Applications where C-based web development continues include the HTTP configuration pages on routers, IoT devices, and similar, although even here some projects have parts in higher-level languages, e.g., the use of Lua within OpenWRT. Two popular web servers, Apache HTTP Server and Nginx, are written in C. C's close-to-the-metal approach allows for the construction of these high-performance software systems. C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--. Also, the contemporary major compilers GCC and LLVM both feature an intermediate representation that is not C, and those compilers support front ends for many languages including C. C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C. Many languages support calling library functions in C; for example, the Python-based framework NumPy uses C for the high-performance and hardware-interacting aspects. A consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C.
For example, the reference implementations of Python, Perl, Ruby, and PHP are written in C. Limitations Ritchie himself joked about the limitations of the language that he created, saying that C has "the power of assembly language and the convenience of ... assembly language". While C is popular, influential and hugely successful, it has drawbacks, including: For some purposes, restricted styles of C have been adopted, e.g. MISRA C or CERT C, in an attempt to reduce the opportunity for unwanted behaviour. Databases such as CWE attempt to count the ways that systems in general, especially those coded in C, have potential vulnerabilities, along with recommendations for mitigation. There are tools that can mitigate some of the drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs. C's use of pointers can be made less risky by use of instruction set architecture extensions such as CHERI or Permission Overlay Extensions. These techniques change the fundamental nature of pointers at a hardware level to include bounds checks and permissions, which can help prevent buffer over-runs and inappropriate heap accesses. Since the early 2020s the Linux kernel has included sections written in Rust, a language which has specific measures to improve safety. Related languages Many languages developed after C were influenced by and borrowed aspects of C, including C++, C#, C shell, D, Go, Java, JavaScript, Julia, Limbo, LPC, Objective-C, Perl, PHP, Python, Ruby, Rust, Swift, Verilog and SystemVerilog. Some claim that the most pervasive influence has been syntactical – that these languages combine the statement and expression syntax of C with type systems, data models and large-scale program structures that differ from those of C, sometimes radically. Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting. When object-oriented programming languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler. The C++ programming language (originally named "C with Classes") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax. C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions. Objective-C was originally a thin layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk. In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C.
========================================
[SOURCE: https://en.wikipedia.org/wiki/AmbientTalk] | [TOKENS: 222]
AmbientTalk AmbientTalk is an experimental object-oriented distributed programming language developed at the Programming Technology Laboratory at the Vrije Universiteit Brussel, Belgium. The language is primarily targeted at writing programs deployed in mobile ad hoc networks. AmbientTalk is meant to serve as an experimentation platform for new language features or programming abstractions that facilitate the construction of software that has to run in highly volatile networks exhibiting intermittent connectivity and little infrastructure. It is implemented in Java, which enables interpretation on various platforms, including Android. The interpreter's standard library also provides a seamless interface between Java and AmbientTalk objects, called the symbiosis. The language's concurrency features, which include support for futures and event-loop concurrency, are founded on the actor model and have been largely influenced by the E programming language. The language's object-oriented features find their influence in languages like Smalltalk (i.e. block closures, keyworded messages) and Self (prototype-based programming, traits, delegation). Hello world The classical "Hello, World!" program is not very representative of the language features. However, consider its distributed version:
========================================
[SOURCE: https://en.wikipedia.org/wiki/Exclusive_economic_zone_of_the_United_States] | [TOKENS: 340]
Exclusive economic zone of the United States The United States has the world's second largest exclusive economic zone (EEZ) after France. The total size is 11,351,000 km² (4,383,000 sq mi). Areas of its EEZ are located in three oceans, the Gulf of Mexico, and the Caribbean Sea. The most notable areas are Alaska, Hawaii, and the East Coast, West Coast, and Gulf Coast of the United States. Geography The EEZ borders those of Russia to the northwest; Canada to the north; Cuba, the Bahamas, and Mexico to the south; the Dominican Republic, the British Virgin Islands, and Anguilla to the southeast; and Samoa and Niue to the southwest. The unincorporated territories of Guam, Puerto Rico, the U.S. Virgin Islands, and the Northern Mariana Islands are included. Disputes A wedge-shaped section of the Beaufort Sea is disputed between Canada and the United States, because the area reportedly contains substantial oil reserves. Since 2007, the Dominican Republic, on Hispaniola, has considered itself an archipelagic state, encroaching on the long-established median or equidistance line dividing the EEZ of the Dominican Republic and Puerto Rico, and claiming a portion of the EEZ claimed by the United States in relation to the archipelago of Puerto Rico, which is itself an unincorporated U.S. territory. The United States does not accept the archipelagic status and maritime boundaries claimed by the Dominican Republic. Victor Prescott, an authority in the field of maritime boundaries, argued that, as the coasts of both states are short with few offshore islands, an equidistance line is appropriate.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Consumer_electronics] | [TOKENS: 3936]
Consumer electronics Consumer electronics, also known as home electronics, are electronic devices intended for everyday household use. Consumer electronics include those used for entertainment, communications, and recreation. Historically, these products were referred to as "black goods" in American English due to many products being housed in black or dark casings. This term is used to distinguish them from "white goods", which are meant for housekeeping tasks, such as washing machines and refrigerators. In British English, they are often called "brown goods" by producers and sellers. Since the 2010s, this distinction has been absent in big box consumer electronics stores, whose inventories include entertainment, communication, and home office devices, as well as home appliances. Radio broadcasting in the early 20th century brought the first major consumer product, the broadcast receiver. Later products included telephones, televisions, calculators, cameras, video game consoles, mobile phones, personal computers, and MP3 players. In the 2010s, consumer electronics stores often sold GPS, automotive electronics (vehicle audio), video game consoles, electronic musical instruments (e.g., synthesizer keyboards), karaoke machines, digital cameras, and video players (VCRs in the 1980s and 1990s, followed by DVD players and Blu-ray players). Stores also sold smart light fixtures, network devices, camcorders, and smartphones. Some of the modern products being sold include virtual reality goggles, smart home devices that connect to the Internet, streaming devices, and wearable technology. In the 2010s, most consumer electronics were based on digital technologies and increasingly merged with the computer industry, in a trend often referred to as the consumerization of information technology. Some consumer electronics stores also began selling office and baby furniture. Consumer electronics stores may be physical "brick and mortar" retail stores, online stores, or combinations of both. Annual consumer electronics sales were expected to reach $2.9 trillion by 2020. The sector is part of the electronics industry, which is, in turn, driven by the semiconductor industry. History For its first fifty years, the phonograph turntable did not use electronics; the needle and sound horn were purely mechanical technologies. However, in the 1920s, radio broadcasting became the basis of the mass production of radio receivers. The vacuum tubes that had made radios practical were used with record players as well, to amplify the sound so that it could be played through a loudspeaker. Television was invented soon after, but remained insignificant in the consumer market until the 1950s. The first working transistor, a point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, which led to significant research in the field of solid-state semiconductors in the early 1950s. The invention and development of the earliest transistors at Bell led to transistor radios, in turn promoting the emergence of the home entertainment consumer electronics industry starting in the 1950s. This was largely due to the efforts of Tokyo Tsushin Kogyo (now Sony) in successfully commercializing transistor technology for a mass market, with affordable transistor radios and then transistorized television sets.
Integrated circuits (ICs) followed when manufacturers built circuits (usually for military purposes) on a single substrate using electrical connections between circuits within the chip itself. IC technology led to more advanced and cheaper consumer electronics, such as transistorized televisions, pocket calculators, and, by the 1980s, video game consoles and personal computers affordable for regular middle-class families. Beginning in the 1980s and continuing through the early 2000s, many consumer electronics, such as televisions and stereo systems, underwent digitization. The introduction of compact discs (CDs) and personal computers during this period signalled a broader shift as digital computer technology and digital signals were increasingly integrated into consumer devices. This transformation significantly altered their functionality and led to improved performance, such as enhanced image quality in televisions. These advancements were largely driven by Moore's law, which enabled rapid increases in processing power and reductions in cost and size. In 2004, the consumer electronics industry was worth US$240 billion annually worldwide, comprising visual equipment, audio equipment, and games consoles. The industry became global, with Asia Pacific having a 35% market share, Europe having 31.5%, the US having 23%, and the rest of the world owning the remainder. Major players in this industry are household names like Sony, Samsung, Philips, Sanyo, and Sharp. The increase in popularity of such domestic appliances as 'white goods' is a characteristic element of consumption patterns during the golden age of the Western economy. Europe's white goods industry has evolved over the past 40 years, first by changing tariff barriers, and later by technical and demand shifts. Spending on domestic appliances has claimed only a tiny fraction of disposable income, rising from 0.5 percent in the US in 1920 to about 2 percent in 1980. Yet the sequence of electrical and mechanical durables has altered the activities and experiences of households in America and Britain in the twentieth century. With the expansion of cookers, vacuum cleaners, refrigerators, washing machines, radios, televisions, air conditioning, and microwave ovens, households have gained an escalating number of appliances. Despite the ubiquity of these goods, their diffusion is not well understood. Some types of appliances diffuse more frequently than others. In particular, home entertainment appliances such as radio and television have diffused much faster than household and kitchen machines. Products Consumer electronics devices include those used for entertainment, communications, and recreation. Consumer electronics products such as the digital distribution of video games have become increasingly based on the internet and digital technologies. The consumer electronics industry has primarily merged with the software industry in what is increasingly referred to as the consumerization of information technology. One overriding characteristic of consumer electronic products is the trend of ever-falling prices. This is driven by gains in manufacturing efficiency and automation, lower labor costs as manufacturing has moved to lower-wage countries, and improvements in semiconductor design. Semiconductor components benefit from Moore's law, an observed principle which states that, for a given price, semiconductor functionality doubles every two years.
While consumer electronics continues its trend of convergence, combining elements of many products, consumers face different purchasing decisions. There is an ever-increasing need to keep product information updated and comparable for the consumer to make an informed choice. Style, price, specification, and performance are all relevant. There is a gradual shift towards e-commerce web-storefronts. Many products include Internet access using technologies such as Wi-Fi, Bluetooth, EDGE, or Ethernet. Products not traditionally associated with computer use (such as TVs or hi-fi equipment) now provide options to connect to the Internet or to a computer using a home network to provide access to digital content. The desire for high-definition (HD) content has led the industry to develop a number of technologies, such as WirelessHD or ITU-T G.hn, which are optimized for distribution of HD content between consumer electronic devices in a home. The consumer electronics industry faces unpredictable consumer tastes on the demand side, supplier-related delays or disruptions on the supply side, and production challenges in between. The high rate of technology evolution or revolution requires large investments without any guarantee of profitable returns. As a result, the big players rely on global markets to achieve economies of scale. Even these companies sometimes have to cooperate with each other, for instance on standards, to reduce the risk of their investments. In supply chain management, there is much discussion of risks related to such aspects of supply chains as short product lifecycles, high competition combined with cooperation, and globalization. The consumer electronics industry is the very embodiment of these aspects of supply chain management and related risks. While some of the supply- and demand-related risks are similar to those in industries such as the toy industry, the consumer electronics industry faces additional risks due to its vertically integrated supply chains. There are also numerous contextual risks that cut across the supply chain, especially impacting companies with global supply chains. These include cultural differences in multinational operations, environmental risk, regulatory risk, and exchange rate risk across multiple countries. Whether or not demand is comparable across countries affects the extent of the gains from international integration. In addition, consumer preferences change over time, disturbing existing patterns of behavior. A feature of some industries is that demand for variety increases as the market moves from first-time buying to replacement demand. Lizabeth Cohen's book A Consumers' Republic explores this idea of consumer preferences further: "Only if we have large demands can we expect large production". Industries The electronics industry, especially consumer electronics, emerged in the 20th century and has become a global industry worth billions of dollars. Contemporary society uses all manner of electronic devices built in automated or semi-automated factories operated by the industry. Most consumer electronics are manufactured in China, owing to maintenance cost, availability of materials, quality, and speed, as opposed to other countries such as the United States. Cities such as Shenzhen and Dongguan have become important production centers for the industry, attracting many consumer electronics companies such as Apple Inc.
An electronic component is any essential discrete device or physical entity in an electronic system used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in a singular form, and are not to be confused with electrical elements, conceptual abstractions representing idealized electronic components. Consumer electronics such as personal computers use various types of software. Embedded software is used within some consumer electronics, such as mobile phones. This type of software may be embedded within the hardware of electronic devices. Some consumer electronics include software that is used on a personal computer in conjunction with electronic devices, such as camcorders and digital cameras, and third-party software for such devices also exists. Some consumer electronics adhere to protocols, such as connection protocols for high-speed bidirectional signals. In telecommunications, a communications protocol is a system of digital rules for data exchange within or between computers. The Consumer Electronics Show (CES) trade show has taken place yearly in Las Vegas, Nevada since its foundation in 1973. The event, which grew from 100 exhibitors in its inaugural year to more than 4,500 exhibiting companies in its 2020 edition, features the latest in consumer electronics, speeches by industry experts, and innovation awards. The IFA Berlin trade show has taken place in Berlin, Germany since its foundation in 1924. The event features new consumer electronics and speeches by industry pioneers. The Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional society, has many initiatives to advance the state of the art of consumer electronics. IEEE has a dedicated society of thousands of professionals to promote CE, called the Consumer Electronics Society (CESoc), along with multiple periodicals and international conferences to encourage collaborative research and development in the field. The flagship conference of CESoc, the IEEE International Conference on Consumer Electronics (ICCE), is in its 35th year. The IEEE Computer Society has also initiated a conference on research into next-generation consumer electronics, known as smart electronics; this conference, the IEEE Symposium on Smart Electronics Systems (IEEE-iSES), is in its 9th year. Electronics retailing is a significant part of the retail industry in many countries. In the United States, dedicated consumer electronics stores have mostly given way to big-box stores such as Best Buy, the largest consumer electronics retailer in the country, although smaller dedicated stores include Apple Stores, as well as specialist stores that serve, for example, audiophiles, such as the single-branch B&H Photo store in New York City. Broad-based retailers, such as Walmart and Target, also sell consumer electronics in many of their stores. In April 2014, retail e-commerce sales were highest in the consumer electronics and computer categories. Some consumer electronics retailers offer extended warranties on products with programs such as SquareTrade. An electronics district is an area of commerce with a high density of retail stores that sell consumer electronics. Consumer electronics service can refer to the maintenance of such products; when consumer electronics have malfunctions, they may sometimes be repaired.
In 2013, in Pittsburgh, Pennsylvania, the increased popularity of listening to sound from analog audio devices, such as phonographs, as opposed to digital sound, sparked a noticeable increase in business for the electronics repair industry there. A mobile phone, cellular phone, cell phone, cellphone, handphone, or hand phone, sometimes shortened to simply mobile, cell, or just phone, is a portable telephone that can make and receive calls over a radio frequency link while the user is moving within a telephone service area. The radio frequency link establishes a connection to the switching systems of a mobile phone operator, which provides access to the public switched telephone network (PSTN). Modern mobile telephone services use a cellular network architecture and, therefore, mobile telephones are called cellular telephones or cell phones in North America. In addition to telephony, digital mobile phones (2G) support a variety of other services, such as text messaging, MMS, email, Internet access, short-range wireless communications (infrared, Bluetooth), business applications, video games, and digital photography. Mobile phones offering only those capabilities are known as feature phones; mobile phones which offer greatly advanced computing capabilities are referred to as smartphones. A smartphone is a portable device that combines mobile telephone and computing functions into one unit. Smartphones are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web navigation over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of MOSFET integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software (such as a magnetometer, proximity sensors, barometer, gyroscope, accelerometer, and more), and support wireless communications protocols (such as Bluetooth, Wi-Fi, or satellite navigation). Environmental impact In 2017, Greenpeace USA published a study of 17 of the world's leading consumer electronics companies covering their energy and resource consumption and their use of chemicals. Electronic devices use thousands of rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes. These metals are also used in the renewable energy industry, meaning that consumer electronics compete directly with renewable energy for the same raw materials. The energy consumption of consumer electronics and their environmental impact, whether from their production processes or the disposal of the devices, is increasing steadily. The U.S. Energy Information Administration (EIA) estimates that electronic devices and gadgets account for about 10%–15% of the energy use in American homes – largely because of their number; the average house has dozens of electronic devices. The energy consumption of consumer electronics increases – in America and Europe – to about 50% of household consumption if the term is redefined to include home appliances such as refrigerators, dryers, clothes washers, and dishwashers. Standby power – used by consumer electronics and appliances while they are turned off – accounts for 5–10% of total household energy consumption, costing the average household in the United States $100 annually.
A study by the United States Department of Energy's Berkeley Lab found that videocassette recorders (VCRs) consume more electricity during the course of a year in standby mode than when they are used to record or play back videos. Similar findings were obtained concerning satellite boxes, which consume almost the same amount of energy in "on" and "off" modes. A 2012 study in the United Kingdom, carried out by the Energy Saving Trust, found that the devices using the most power in standby mode included televisions, satellite boxes, and other video and audio equipment. The study concluded that UK households could save up to £86 per year by switching devices off instead of using standby mode. A report from the International Energy Agency in 2014 found that $80 billion of power is wasted globally per year due to the inefficiency of electronic devices. Consumers can reduce unwanted use of standby power by unplugging their devices, using power strips with switches, or buying devices that are standardized for better energy management, particularly Energy Star-marked products. The high number of different metals and their low concentrations in electronics mean that recycling is limited and energy-intensive. Electronic waste describes discarded electrical or electronic devices. Many consumer electronics may contain toxic minerals and elements, and many electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium, beryllium, mercury, dioxins, or brominated flame retardants. Electronic waste recycling may involve significant risk to workers and communities, and great care must be taken to avoid unsafe exposure in recycling operations and the leaking of materials such as heavy metals from landfills and incinerator ashes. However, a large share of the electronic waste produced in developed countries is exported to, and handled by, the informal sector in countries like India, despite the fact that exporting electronic waste to them is illegal. A strong informal sector can be a problem for safe and clean recycling. E-waste policies have evolved since the 1970s, with priorities shifting over time. Initially, the focus was on safer disposal methods due to the toxic materials often found in electronic waste. Over the years, attention turned to the recovery of valuable metals and plastics that could be recycled. More recently, the emphasis has shifted once again, this time toward reusing entire devices. New guidelines promoting 'preparation for reuse' highlight the growing importance of repair and reuse, signaling a gradual change in public and policy attitudes. With the turnover of small household appliances high and costs relatively low, many consumers throw unwanted electrical goods in the normal dustbin, meaning that items of potentially high reuse or recycling value go to landfills. While larger items such as washing machines are usually collected, it has been estimated that the 160,000 tonnes of electrical and electronic equipment (EEE) in regular waste collections were worth £220 million. 23% of EEE taken to Household Waste Recycling Centres was immediately resalable – or would be with minor repairs or refurbishment. This indicates a lack of awareness among consumers about where and how to dispose of EEE, and about the potential value of things that are going in the bin. For the reuse and repair of electrical goods to increase substantially in the UK, some barriers must be overcome.
These include people's mistrust of whether used equipment will be functional and safe, and the stigma some attach to owning second-hand goods. The benefits of reuse, however, could give lower-income households access to previously unaffordable technology while helping the environment. Health impact Desktop monitors and laptops can contribute to major physical health concerns known as repetitive strain injuries. For example, when users are forced to bend to see electronic screens better, they may experience chronic neck and back pain. The best-known condition in this category is carpal tunnel syndrome; others include de Quervain syndrome, which affects tendons in the thumb. Electronics use before bed is also associated with poorer sleep quality and shorter sleep duration, and poor-quality, shorter sleep is associated with various health conditions such as obesity and diabetes.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-AutoNT-1-19] | [TOKENS: 4314]
Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented, and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (which was inspired by SETL), and was to be capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008; it was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x versions received security updates down to Python 3.9.24, and then again with 3.9.25, the final release in the 3.9 series. Python 3.10 has been, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do ... while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal: there are at least three ways to format a string literal, with no certainty as to which one a programmer should use (all three appear in the sketch below). Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture."
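A minimal sketch of those three formatting styles (the variable names are illustrative):

    # Three ways to format the same string literal.
    name, count = "spam", 3
    a = "%s x%d" % (name, count)        # printf-style % operator
    b = "{} x{}".format(name, count)    # str.format() method
    c = f"{name} x{count}"              # f-string (Python 3.6+)
    assert a == b == c == "spam x3"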
Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost to clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages; however, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include assignment, conditional and loop statements, and function and class definitions. The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing – in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels, as the sketch below illustrates.
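A minimal sketch of both mechanisms (the function names are illustrative):

    # Passing data back into a generator with send() (Python 2.5+),
    # and delegating through an intermediate stack level with
    # yield from (Python 3.3+).
    def accumulator():
        total = 0
        while True:
            value = yield total   # receives whatever the caller send()s
            total += value

    def delegator():
        yield from accumulator()  # send() passes through this level

    gen = delegator()
    next(gen)             # prime the generator; it yields the initial 0
    print(gen.send(10))   # 10
    print(gen.send(5))    # 15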
Python's expressions include arithmetic, comparison, and Boolean expressions, as well as conditional expressions, comprehensions, and lambda expressions. In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to some duplicated functionality. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. The standard library includes a module, typing, that provides several type names for use in annotations. The mypy project also ships a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for the arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g., 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, representing positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true, and that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. The sketch below demonstrates these division and rounding semantics.
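A minimal sketch, checkable with assertions (Python 3 semantics):

    # Floor division, modulo sign behavior, exponentiation, and
    # round-half-to-even, as described above.
    assert 7 / 2 == 3.5        # true division always yields a float
    assert 7 // 2 == 3         # floor division
    assert -7 // 2 == -4       # floors toward negative infinity
    assert 4 % -3 == -2        # remainder takes the sign of the divisor
    a, b = -7, 2
    assert b * (a // b) + a % b == a   # identity holds for nonzero b
    assert 5 ** 3 == 125 and 9 ** 0.5 == 3.0
    assert round(1.5) == round(2.5) == 2   # ties round to the even side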
Python allows Boolean expressions containing multiple equality relations, in a manner consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters; an example of a function that prints its inputs appears in the sketch below. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header, as the sketch also shows. Code examples The sketch below likewise includes a "Hello, World!" program and a program to calculate the factorial of a non-negative integer. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications – for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333 – but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contained over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based environments such as Jupyter Notebook. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions – e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
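A minimal sketch of the examples referenced above (the function and parameter names are illustrative):

    # A function that prints its inputs; `greeting` has a default value,
    # used whenever the caller does not supply one.
    def print_inputs(x, y, greeting="inputs:"):
        print(greeting, x, y)

    print_inputs(1, 2)             # inputs: 1 2
    print_inputs(1, 2, "values:")  # values: 1 2

    # "Hello, World!" program:
    print("Hello, World!")

    # Factorial of a non-negative integer:
    def factorial(n: int) -> int:
        if n < 0:
            raise ValueError("n must be non-negative")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    assert factorial(5) == 120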
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and unofficial support has existed for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported, but support has since been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; many outdated platforms have been dropped, so far fewer operating systems are supported than in the past. All alternative implementations have at least slightly different semantics. For example, an alternative implementation may have unordered dictionaries, in contrast to current Python versions, whose dictionaries preserve insertion order. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binary sizes massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers, such as Psyco and Unladen Swallow, were developed but are now unsupported. There are several compilers and transpilers to high-level object languages – with unrestricted Python, a subset of Python, or a language similar to Python as the source language – as well as specialized compilers and older projects not designed for Python 3.x and its syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance despite the inherent slowness of an interpreted language; as noted above, these include moving speed-critical functions to C extension modules and using a just-in-time compiler such as PyPy. Language development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with the development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: backward-incompatible versions increment the first number, feature releases the second, and bugfix and security releases the third. Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
========================================
[SOURCE: https://en.wikipedia.org/wiki/CD] | [TOKENS: 7707]
Compact disc The compact disc (CD) is a digital optical disc data storage format co-developed by Philips and Sony to store and play digital audio recordings. It employs the Compact Disc Digital Audio (CD-DA) standard and is capable of holding uncompressed stereo audio. First released in Japan in October 1982, the CD was the second optical disc format to reach the market, following the larger LaserDisc (LD). In later years, the technology was adapted for computer data storage as CD-ROM and subsequently expanded into various writable and multimedia formats. As of 2007, over 200 billion CDs (including audio CDs, CD-ROMs, and CD-Rs) had been sold worldwide. Standard CDs have a diameter of 120 mm (4.7 in) and typically hold up to 74 minutes of audio or approximately 650 MiB (681,574,400 bytes) of data. This was later regularly extended to 80 minutes or 700 MiB (734,003,200 bytes) by reducing the spacing between data tracks, with some discs unofficially reaching up to 99 minutes or 870 MiB (912,261,120 bytes), which falls outside established specifications. Smaller variants, such as the Mini CD, range from 60 to 80 mm (2.4 to 3.1 in) in diameter and have been used for CD singles or for distributing device drivers and software. The CD gained widespread popularity in the late 1980s and early 1990s. By 1991, it had surpassed the phonograph record and the cassette tape in sales in the United States, becoming the dominant physical audio format. By 2000, CDs accounted for 92.3% of the U.S. music market share. The CD is widely regarded as the final dominant format of the album era, before the rise of MP3, digital downloads, and streaming platforms in the mid-2000s led to its decline. Beyond audio playback, the compact disc was adapted for general-purpose data storage under the CD-ROM format, which initially offered more capacity than contemporary personal computer hard disk drives. Additional derived formats include write-once discs (CD-R), rewritable media (CD-RW), and multimedia applications such as Video CD (VCD), Super Video CD (SVCD), Photo CD, Picture CD, Compact Disc Interactive (CD-i), Enhanced Music CD, and Super Audio CD (SACD), the last of which can include a standard CD-DA layer for backward compatibility. History The optophone, first presented in 1913, was an early device that used light for both recording and playback of sound signals on a transparent photograph. More than thirty years later, American inventor James T. Russell was credited with inventing the first system to record digital media on a photosensitive plate. Russell's patent application was filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents for recording in 1988. It is debatable whether Russell's concepts, patents, and prototypes instigated and in some measure influenced the compact disc's design. The compact disc is an evolution of LaserDisc technology, in which a focused laser beam enables the high information density required for high-quality digital audio signals. Unlike the prior art of the optophone and Russell's system, the information on the disc is read from a reflective layer, through a protective substrate, using a laser as the light source. Prototypes were developed by Philips and Sony independently in the late 1970s. Although originally dismissed by Philips Research management as a trivial pursuit, the CD became the primary focus for Philips as the LaserDisc format struggled.
In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. The group of experts analyzed every detail of the proposed CD system and met every two months, alternating between Eindhoven and Tokyo, for discussions. Each time, the experiments conducted were discussed, and the best solution was chosen from the prototypes developed by Sony and Philips. After experimentation, the group decided to adopt Sony's error correction system, CIRC. In a few months' time, Kees Schouhamer Immink developed the recording code called eight-to-fourteen modulation (EFM). EFM increases the playing time by more than 30% compared to the code used in the Philips prototype, without causing any issues with tracking. Sony and Philips decided to include EFM in the official Philips/Sony CD standard. EFM and Sony's error correction code, CIRC, are the only standard-essential patents (SEPs) of the compact disc. After a year of experimentation and discussion, the Red Book CD-DA standard was published in 1980. After their commercial release in 1982, compact discs and their players were extremely popular. Despite players costing up to $1,000, over 400,000 CD players were sold in the United States between 1983 and 1984. By 1988, CD sales in the United States surpassed those of vinyl LPs, and, by 1992, CD sales surpassed those of prerecorded music-cassette tapes. The success of the compact disc has been credited to the cooperation between Philips and Sony, which together agreed upon and developed compatible hardware. The unified design of the compact disc allowed consumers to purchase any disc or player from any company and allowed the CD to dominate the at-home music market unchallenged. In 1974, Lou Ottens, director of the audio division of Philips, started a small group to develop an analog optical audio disc with a diameter of 20 cm (7.9 in) and a sound quality superior to that of the vinyl record. However, due to the unsatisfactory performance of the analog format, two Philips research engineers recommended a digital format in March 1974. In 1977, Philips established a laboratory with the mission of creating a digital audio disc. The diameter of Philips's prototype compact disc was set at 11.5 cm (4.5 in), the diagonal of an audio cassette. Heitaro Nakajima, who developed an early digital audio recorder within Japan's national public broadcasting organization, NHK, in 1970, became general manager of Sony's audio department in 1971. In 1973, his team developed a digital PCM adaptor that made audio recordings using a Betamax video recorder. After this, the leap to storing digital audio on an optical disc was easily made, in 1974. Sony first publicly demonstrated an optical digital audio disc in September 1976. A year later, in September 1977, Sony showed the press a 30 cm (12 in) disc that could play an hour of digital audio (44,100 Hz sampling rate and 16-bit resolution) using modified frequency modulation encoding. In September 1978, Sony demonstrated an optical digital audio disc with a diameter of 30 cm (12 in) with a 150-minute playing time, 44,056 Hz sampling rate, 16-bit linear resolution, and cross-interleaved Reed–Solomon coding (CIRC) error correction code – specifications similar to those later settled upon for the standard compact disc format in 1980. Technical details of Sony's digital audio disc were presented during the 62nd AES Convention, held on 13–16 March 1979, in Brussels. Sony's AES technical paper was published on 1 March 1979.
A week later, on 8 March, Philips publicly demonstrated a prototype of an optical digital audio disc at a press conference called "Philips Introduce Compact Disc" in Eindhoven, Netherlands. Sony executive Norio Ohga, later CEO and chairman of Sony, and Heitaro Nakajima were convinced of the format's commercial potential and pushed further development despite widespread skepticism. Led by engineers Kees Schouhamer Immink and Toshitada Doi, the joint task force's research pushed forward laser and optical disc technology. After a year of experimentation and discussion, the task force produced the Red Book CD-DA standard. First published in 1980, the standard was formally adopted by the IEC as an international standard in 1987, with various amendments becoming part of the standard in 1996. Philips coined the term compact disc in line with another audio product, the Compact Cassette, and contributed the general manufacturing process, based on video LaserDisc technology. Philips also contributed eight-to-fourteen modulation (EFM), while Sony contributed the error-correction method, CIRC, which offers resilience to defects such as scratches and fingerprints. The Compact Disc Story, told by a former member of the task force, gives background information on the many technical decisions made, including the choice of the sampling frequency, playing time, and disc diameter. The task force consisted of around six people, though according to Philips, the compact disc was "invented collectively by a large group of people working as a team". Among the early milestones in the launch and adoption of the format: the first artist to sell a million copies on CD was Dire Straits, with their 1985 album Brothers in Arms. One of the first CD markets was devoted to reissuing popular music whose commercial potential was already proven. The first major artist to have their entire catalog converted to CD was David Bowie; the first fourteen of his (then) sixteen studio albums (up to Scary Monsters (and Super Creeps)) were made available by RCA Records in February 1985, along with four greatest hits albums, while his fifteenth and sixteenth albums (Let's Dance and Tonight) had already been issued on CD by EMI Records in 1983 and 1984, respectively. On 26 February 1987, the first four UK albums by the Beatles were released in mono on compact disc. The growing acceptance of the CD in 1983 marked the beginning of the popular digital audio revolution. It was enthusiastically received, especially in the early-adopting classical music and audiophile communities, and its handling quality received particular praise. As the price of players gradually came down, and with the introduction of the portable Discman, the CD began to gain popularity in the larger popular and rock music markets. With the rise in CD sales, pre-recorded cassette tape sales began to decline in the late 1980s; CD sales overtook cassette sales in the early 1990s. In 1988, 400 million CDs were manufactured by 50 pressing plants around the world. Early CD players employed binary-weighted digital-to-analog converters (DACs), which contained individual electrical components for each bit of the DAC. Even when using high-precision components, this approach was prone to decoding errors. Another issue was jitter, a time-related defect.
Confronted with the instability of DACs, manufacturers initially turned to increasing the number of bits in the DAC and using several DACs per audio channel, averaging their output. This increased the cost of CD players but did not solve the core problem. A breakthrough in the late 1980s culminated in the development of the 1-bit DAC, which converts a high-resolution, low-frequency digital input signal into a lower-resolution, high-frequency signal that is mapped to voltages and then smoothed with an analog filter. The temporary use of a lower-resolution signal simplified circuit design and improved efficiency, which is why it became dominant in CD players starting from the early 1990s. Philips used a variation of this technique called pulse-density modulation (PDM), while Matsushita (now Panasonic) chose pulse-width modulation (PWM), advertising it as MASH, an acronym derived from their patented Multi-stAge noiSe-sHaping PWM topology. The CD was primarily planned as the successor to the vinyl record for playing music, rather than as a data storage medium. However, CDs have grown to encompass other applications. In 1983, following the CD's introduction, Immink and Joseph Braat presented the first experiments with erasable compact discs during the 73rd AES Convention. In June 1985, the computer-readable CD-ROM (read-only memory) and, in 1990, recordable CD-R discs were introduced. Recordable CDs became an alternative to tape for recording and distributing music and could be duplicated without degradation in sound quality. Other, newer video formats such as DVD and Blu-ray use the same physical geometry as the CD, and most DVD and Blu-ray players are backward compatible with audio CDs. CD sales in the United States peaked by 2000. By the early 2000s, the CD player had largely replaced the audio cassette player as standard equipment in new automobiles, with 2010 being the final model year for any car in the United States to have a factory-equipped cassette player. Two new formats were marketed in the 2000s as successors to the CD: the Super Audio CD (SACD) and DVD-Audio. However, neither was widely adopted, partly due to the increased relevance of digital (virtual) music and the apparent lack of audible improvement in audio quality to most human ears; their failure effectively extended the CD's longevity in the music market. With the advent and popularity of Internet-based distribution of files in lossy-compressed audio formats such as MP3, sales of CDs began to decline in the 2000s. For example, between 2000 and 2008, despite overall growth in music sales and one anomalous year of increase, major-label CD sales declined overall by 20%. Despite rapidly declining sales year over year, the pervasiveness of the technology lingered for a time, with companies placing CDs in pharmacies, supermarkets, and filling station convenience stores to target buyers less likely to be able to use Internet-based distribution. In 2012, CDs and DVDs made up only 34% of music sales in the United States. By 2015, only 24% of music in the United States was purchased on physical media, two thirds of this consisting of CDs; however, in the same year in Japan, over 80% of music was bought on CDs and other physical formats. In 2018, U.S. CD sales were 52 million units – less than 6% of the peak sales volume in 2000. In the UK, 32 million units were sold, almost 100 million fewer than in 2008.
In 2018, however, Best Buy announced plans to decrease its focus on CD sales, while continuing to sell vinyl records, sales of which were growing during the vinyl revival. During the 2010s, the increasing popularity of solid-state media and music streaming services caused automakers to remove automotive CD players in favor of minijack auxiliary inputs, wired connections to USB devices, and wireless Bluetooth connections. Automakers viewed CD players as using up valuable space and weight that could be reallocated to more popular features, like large touchscreens. By 2021, only Lexus and General Motors were still including CD players as standard equipment with certain vehicles. CDs continued to be strong in some markets such as Japan, where 132 million units were produced in 2019. The decline in CD sales has slowed in recent years; in 2021, CD sales increased in the US for the first time since 2004, with Axios attributing the rise to "young people who are finding they like hard copies of music in the digital age". It came at the same time as both vinyl and cassette reached sales levels not seen in 30 years. The RIAA reported that CD revenue dipped in 2022 before increasing again in 2023, when it overtook downloading for the first time in over a decade. In the US, 33.4 million CD albums were sold in the year 2022. In France in 2023, 10.5 million CDs were sold, almost double the number of vinyl records, though each format generated about 12% of French music industry revenues. Sony and Philips received awards from professional organizations in recognition of their development of the compact disc. Physical details A CD is made from 1.2-millimetre (0.047 in) thick polycarbonate plastic and weighs 14–33 grams. From the center outward, the components are: the center spindle hole (15 mm), the first-transition area (clamping ring), the clamping area (stacking ring), the second-transition area (mirror band), the program (data) area, and the rim. The program area occupies radii from 25 to 58 mm. A thin layer of aluminum or, more rarely, gold is applied to the surface, making it reflective. The metal is protected by a film of lacquer, normally spin-coated directly onto the reflective layer. The label is printed on the lacquer layer, usually by screen printing or offset printing. CD data is represented as tiny indentations known as pits, encoded in a spiral track molded into the top of the polycarbonate layer. The areas between pits are known as lands. Each pit is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 μm in length. The distance between the windings (the pitch) is 1.6 μm (measured center-to-center, not between the edges). When playing an audio CD, a motor within the CD player spins the disc to a scanning velocity of 1.2–1.4 m/s (constant linear velocity, CLV) – equivalent to approximately 500 RPM at the inside of the disc, and approximately 200 RPM at the outside edge. The track on the CD begins at the inside and spirals outward, so a disc played from beginning to end slows its rotation rate during playback. The program area is 86.05 cm², so dividing that area by the track pitch gives the length of the recordable spiral: 86.05 cm² / 1.6 μm = 5.38 km. With a scanning speed of 1.2 m/s, this yields a playing time of 74 minutes, or 650 MiB of data on a CD-ROM. A disc with data packed slightly more densely is tolerated by most players (though some old ones fail). Using a linear velocity of 1.2 m/s and a narrower track pitch of 1.5 μm increases the playing time to 80 minutes, and the data capacity to 700 MiB.
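That arithmetic can be checked directly; a minimal Python sketch using the figures quoted above:

    import math

    # Program area: an annulus from radius 25 mm out to 58 mm.
    area_m2 = math.pi * (0.058**2 - 0.025**2)   # ~8.605e-3 m^2 = 86.05 cm^2

    track_pitch = 1.6e-6   # metres between adjacent spiral windings
    scan_speed = 1.2       # metres per second (constant linear velocity)

    spiral_length = area_m2 / track_pitch                # ~5,380 m = 5.38 km
    playing_time_min = spiral_length / scan_speed / 60   # ~74.7 minutes

    print(f"{spiral_length / 1000:.2f} km, {playing_time_min:.1f} min")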
Even denser tracks are possible, with semi-standard 90-minute/800 MiB discs having a 1.33 μm pitch and 99-minute/870 MiB discs having 1.26 μm, but compatibility suffers as density increases. A CD is read by focusing a 780 nm wavelength (near infrared) semiconductor laser (early players used a He–Ne laser) through the bottom of the polycarbonate layer. The change in height between pits and lands results in a difference in the way the light is reflected. Because the pits are indented into the top layer of the disc and are read through the transparent polycarbonate base, the pits form bumps when read. The laser hits the disc, casting a circle of light wider than the modulated spiral track, reflecting partially from the lands and partially from the top of any bumps where they are present. As the laser passes over a pit (bump), its height means that the round-trip path of the light reflected from its peak is 1/2 wavelength out of phase with the light reflected from the land around it. This is because the height of a bump is around 1/4 of the wavelength of the light used, so the light falls 1/4 wavelength out of phase before reflection and another 1/4 wavelength out of phase after reflection. This causes partial cancellation of the laser's reflection from the surface. By measuring the reflected intensity change with a photodiode, a modulated signal is read back from the disc. To accommodate the spiral pattern of data, the laser is placed on a mobile mechanism within the disc tray of any CD player. This mechanism typically takes the form of a sled that moves along a rail. The sled can be driven by a worm gear or linear motor. Where a worm gear is used, a second, shorter-throw linear motor, in the form of a coil and magnet, makes fine position adjustments to track eccentricities in the disc at high speed. Some CD drives (particularly those manufactured by Philips during the 1980s and early 1990s) use a swing arm similar to that seen on a gramophone. The pits and lands do not directly represent the 0s and 1s of binary data. Instead, non-return-to-zero, inverted (NRZI) encoding is used: a change from either pit to land or land to pit indicates a 1, while no change indicates a series of 0s. There must be at least two, and no more than ten, 0s between each 1, a constraint defined by the length of the pit (a sketch of this decoding follows below). This, in turn, is decoded by reversing the eight-to-fourteen modulation used in mastering the disc, and then reversing the cross-interleaved Reed–Solomon coding, finally revealing the raw data stored on the disc. These encoding techniques (defined in the Red Book) were originally designed for CD Digital Audio, but they later became a standard for almost all CD formats (such as CD-ROM). CDs are susceptible to damage during handling and from environmental exposure. Pits are much closer to the label side of a disc, enabling defects and contaminants on the clear side to be out of focus during playback. Consequently, CDs are more likely to suffer damage on the label side of the disc. Scratches on the clear side can be repaired by refilling them with similar refractive plastic or by careful polishing. The edges of CDs are sometimes incompletely sealed, allowing gases and liquids to enter the CD and corrode the metal reflective layer and/or interfere with the focus of the laser on the pits, a condition known as disc rot. The fungus Geotrichum candidum has been found – under conditions of high heat and humidity – to consume the polycarbonate plastic and aluminium found in CDs.
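A minimal sketch of the run-length-limited reading described above, with EFM and CIRC decoding omitted (the function name and sample data are illustrative):

    # Each transition (pit<->land edge) reads as a 1; each clock period
    # with no transition reads as a 0.
    def nrzi_decode(surface):
        """`surface` is a sequence of 0/1 samples (land/pit levels)."""
        bits = []
        previous = surface[0]
        for level in surface[1:]:
            bits.append(1 if level != previous else 0)
            previous = level
        return bits

    # A pit four clock periods long between lands: its two edges give two
    # 1s separated by three 0s (the Red Book requires between 2 and 10).
    print(nrzi_decode([0, 1, 1, 1, 1, 0]))  # [1, 0, 0, 0, 1]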
The data integrity of compact discs can be measured using surface error scanning, which can measure the rates of different types of data errors, known as C1, C2 and CU, together with extended (finer-grain) error measurements known as E11, E12, E21, E22, E31 and E32. Higher rates indicate a possibly damaged or unclean data surface, low media quality, deteriorating media, or recordable media written to by a malfunctioning CD writer. Error scanning can reliably predict data losses caused by media deterioration. Support for error scanning differs between vendors and models of optical disc drives, and extended error scanning (known as "advanced error scanning" in Nero DiscSpeed), which reports the six aforementioned E-type errors, has as of 2020 only been available on Plextor and some BenQ optical drives. The digital data on a CD begins at the inside near the spindle hole and spirals outward toward the edge in a single track; the outward spiral allows adaptation to different-sized discs. Standard CDs are available in two sizes. By far the most common is 120 millimetres (4.7 in) in diameter, with a 74-, 80-, 90-, or 99-minute audio capacity and a 650, 700, 800, or 870 MiB data capacity (700 MiB corresponds to 737,280,000 bytes). Discs are 1.2 millimetres (0.047 in) thick, with a 15-millimetre (0.59 in) center hole. The size of the hole was chosen by Joop Sinjou and based on a Dutch 10-cent coin: a dubbeltje. Philips/Sony patented the physical dimensions. The official Philips history says the capacity was specified by Sony executive Norio Ohga so as to be able to contain the entirety of Beethoven's Ninth Symphony on one disc. According to Philips chief engineer Kees Immink, this is a myth, as the EFM code format had not yet been decided in December 1979, when the 120 mm size was adopted. The adoption of EFM in June 1980 allowed 30 percent more playing time, which would have resulted in 97 minutes for a 120 mm diameter or 74 minutes for a disc as small as 100 millimetres (3.9 in); instead, the information density was lowered by 30 percent to keep the playing time at 74 minutes. The 120 mm diameter has been adopted by subsequent formats, including Super Audio CD, DVD, HD DVD, and Blu-ray Disc. The 80-millimetre (3.1 in) diameter discs ("Mini CDs") can hold up to 24 minutes of music or 210 MiB. SHM-CD (short for Super High Material Compact Disc) is a variant of the Compact Disc which replaces the polycarbonate base with a proprietary material created during joint research by Universal Music Japan and JVC into manufacturing high-clarity liquid-crystal displays. SHM-CDs are fully compatible with all CD players, since the difference in light refraction is not detected as an error. JVC claims that the greater fluidity and clarity of the material used for SHM-CDs results in higher reading accuracy and improved sound quality. However, since the CD-Audio format contains inherent error correction, it is unclear whether a reduction in read errors would be great enough to produce an improved output.
Logical format
The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is described in a document produced in 1980 by the format's joint creators, Sony and Philips. The document is known colloquially as the Red Book, after the color of its cover. The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate per channel. Four-channel sound was to be an allowable option within the Red Book format, but it has never been implemented.
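The playing times and marketed data capacities above are connected by simple sector arithmetic. The following sketch uses the standard Red Book audio rate (44,100 samples/s, two channels, two bytes per sample) and CD-ROM Mode 1 sector figures (2,048 user bytes per sector, 75 sectors per second); these constants are standard, but the rounding to marketed capacities is an observation, not part of any specification.

```python
# Sketch relating audio playing time to data capacity. An audio
# sector carries 2,352 bytes of raw PCM; a CD-ROM Mode 1 sector
# carries 2,048 user bytes plus headers and error correction.

AUDIO_BYTES_PER_SEC = 44_100 * 2 * 2   # 176,400 B/s of raw PCM
SECTORS_PER_SEC = 75
MODE1_USER_BYTES = 2_048

for minutes in (74, 80, 90, 99):
    sectors = minutes * 60 * SECTORS_PER_SEC
    data_bytes = sectors * MODE1_USER_BYTES
    audio_bytes = minutes * 60 * AUDIO_BYTES_PER_SEC
    print(f"{minutes} min: {data_bytes:,} B Mode 1 data "
          f"({data_bytes / 2**20:.0f} MiB), {audio_bytes:,} B raw PCM")

# 80 min -> 737,280,000 bytes (~703 MiB) of Mode 1 data, the byte
# figure quoted above; the marketed 650/700/800/870 MiB capacities
# are rounded versions of these results.
```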
Monaural audio has no existing standard on a Red Book CD; thus, mono source material is usually presented as two identical channels in a standard Red Book stereo track (i.e., mirrored mono). An MP3 CD, by contrast, can carry audio file formats with mono sound. CD-Text is an extension of the Red Book specification for an audio CD that allows for the storage of additional text information (e.g., album name, song name, artist) on a standards-compliant audio CD. The information is stored either in the lead-in area of the CD, where there are roughly five kilobytes of space available, or in the subcode channels R to W on the disc, which can store about 31 megabytes. Compact Disc + Graphics (CD+G) is a special audio compact disc that contains graphics data in addition to the audio data on the disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, it can output a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. The CD+G format takes advantage of the subcode channels R through W, whose six bits per frame store the graphics information. CD + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant of the CD+G format. Like CD+G, CD+EG uses basic CD-ROM features to display text and video information in addition to the music being played, with the extra data stored in subcode channels R-W. Very few CD+EG discs have been published. Super Audio CD (SACD) is a high-resolution, read-only optical audio disc format designed to provide higher-fidelity digital audio reproduction than the Red Book. Introduced in 1999, it was developed by Sony and Philips, the same companies that created the Red Book. SACD was in a format war with DVD-Audio, but neither has replaced audio CDs. The SACD standard is referred to as the Scarlet Book standard. Titles in the SACD format can be issued as hybrid discs; these contain the SACD audio stream as well as a standard audio CD layer playable in standard CD players, making them backward compatible. CD-MIDI is a format used to store music-performance data, which upon playback is performed by electronic instruments that synthesize the audio. Hence, unlike the original Red Book CD-DA, these recordings are not digitally sampled audio recordings. The CD-MIDI format is defined as an extension of the original Red Book. For the first few years of its existence, the CD was a medium used purely for audio. In 1988, the Yellow Book CD-ROM standard was established by Sony and Philips, which defined a non-volatile optical computer data storage medium using the same physical format as audio compact discs, readable by a computer with a CD-ROM drive. Video CD (VCD, View CD, and Compact Disc digital video) is a standard digital format for storing video media on a CD. VCDs are playable in dedicated VCD players, most modern DVD-Video players, personal computers, and some video game consoles. The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard. Overall picture quality is intended to be comparable to VHS video. Poorly compressed VCD video can sometimes be of lower quality than VHS video, but VCD exhibits block artifacts rather than analog noise and does not deteriorate further with each use.
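The "about 31 megabytes" quoted for the R to W subcode channels can be estimated with a back-of-the-envelope sketch. It assumes the standard figures of 75 sectors per second, 96 usable subcode bytes per sector (98 frames minus 2 sync frames), and six of the eight bits of each subcode byte belonging to channels R-W; the arithmetic is illustrative only, and the quoted figure roughly matches a 99-minute disc under these assumptions.

```python
# Back-of-the-envelope estimate of subcode channel R-W capacity.
# Assumptions (standard Red Book figures): 75 sectors/s, 96 usable
# subcode bytes per sector, 6 of 8 bits per byte in channels R-W.

SECTORS_PER_SEC = 75
SUBCODE_BYTES_PER_SECTOR = 96
RW_FRACTION = 6 / 8

def rw_capacity_bytes(minutes: float) -> float:
    sectors = minutes * 60 * SECTORS_PER_SEC
    return sectors * SUBCODE_BYTES_PER_SECTOR * RW_FRACTION

for minutes in (74, 80, 99):
    print(f"{minutes} min: {rw_capacity_bytes(minutes) / 1e6:.1f} MB in R-W")

# ~24-32 MB depending on disc length, in line with the "about 31
# megabytes" quoted above.
```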
352×240 (or SIF) resolution was chosen because it is half the vertical and half the horizontal resolution of NTSC video; 352×288 is similarly one-quarter of PAL/SECAM resolution. This approximates the (overall) resolution of an analog VHS tape, which, although it has double the number of (vertical) scan lines, has a much lower horizontal resolution. Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video media on standard compact discs. SVCD was intended as a successor to VCD and an alternative to DVD-Video, and it falls somewhere between both in terms of technical capability and picture quality. SVCD has two-thirds the resolution of DVD and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes of standard-quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification, one must lower the video bit rate, and therefore quality, to accommodate very long videos. It is usually difficult to fit much more than 100 minutes of video onto one SVCD without incurring significant quality loss, and many hardware players are unable to play video with an instantaneous bit rate lower than 300 to 600 kilobits per second (a rough capacity calculation appears after this section). Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints, and slides using special proprietary encoding. Photo CDs are defined in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players, and any computer with suitable software (irrespective of operating system). The images can also be printed out on photographic paper with a special Kodak machine. This format is not to be confused with Kodak Picture CD, which is a consumer product in CD-ROM format. The Philips Green Book (1993) specifies a standard for interactive multimedia compact discs designed for CD-i players. CD-i discs can contain audio tracks that can be played on regular CD players, but CD-i discs are not compatible with most CD-ROM drives and software. Philips later defined a related format, CD-i Ready, which puts CD-i software and data into the pregap of track 1 to improve compatibility with older audio CD players, and the CD-i Bridge specification was added to create CD-i-compatible discs that can be accessed by regular CD-ROM drives. Enhanced Music CD, also known as CD Extra or CD Plus, is a format that combines audio tracks and data tracks on the same disc by putting audio tracks in a first session and data in a second session. It was developed by Philips and Sony, and it is defined in the Blue Book. VinylDisc is a hybrid of a standard audio CD and the vinyl record; the vinyl layer on the disc's label side can hold approximately three minutes of music.
Manufacture, cost, and pricing
In 1995, material costs were 30 cents for the jewel case and 10 to 15 cents for the CD. The wholesale cost of CDs was $0.75 to $1.15, while the typical retail price of a prerecorded music CD was $16.98. On average, the store received 35 percent of the retail price, the record company 27 percent, the artist 16 percent, the manufacturer 13 percent, and the distributor 9 percent.
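The SVCD length-versus-bit-rate trade-off mentioned earlier can be made concrete with a rough calculation. This sketch assumes an 80-minute disc whose streams are stored in Mode 2 Form 2 sectors (2,324 user bytes each), a common assumption for VCD/SVCD payloads; container and filesystem overheads are ignored, so real figures are somewhat lower.

```python
# Rough sketch of the SVCD length/bit-rate trade-off: the longer
# the video, the lower the average bit rate the disc can sustain.

DISC_BYTES = 80 * 60 * 75 * 2_324   # ~837 MB usable for A/V payload

def max_avg_bitrate_kbps(minutes: float) -> float:
    """Highest average stream bit rate (kbit/s) that still fits."""
    return DISC_BYTES * 8 / (minutes * 60) / 1_000

for minutes in (35, 60, 100):
    print(f"{minutes:>3} min -> {max_avg_bitrate_kbps(minutes):,.0f} kbit/s")

# ~60 minutes fits at roughly 1,900 kbit/s on average; stretching
# to 100 minutes forces the average down to roughly 1,100 kbit/s,
# matching the quality loss described in the text.
```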
When 8-track cartridges, compact cassettes, and CDs were introduced, each was marketed at a higher price than the format it succeeded, even though the cost to produce the media was lower, because the new format had a higher perceived value. This pattern continued from phonograph records to CDs but was broken when Apple marketed MP3s for $0.99 and albums for $9.99; the incremental cost of producing an MP3 is negligible.
Writable compact discs
Recordable Compact Discs, CD-Rs, are injection-molded with a blank data spiral. A photosensitive dye is then applied, after which the discs are metalized and lacquer-coated. The write laser of the CD recorder changes the color of the dye to allow the read laser of a standard CD player to see the data, just as it would with a standard stamped disc. The resulting discs can be read by most CD-ROM drives and played in most audio CD players. CD-Rs follow the Orange Book standard. CD-R recordings are designed to be permanent. Over time, the dye's physical characteristics may change, causing read errors and data loss, until the reading device can no longer recover the data with error-correction methods. Errors can be predicted using surface error scanning. The design life is 20 to 100 years, depending on the quality of the discs, the quality of the writing drive, and storage conditions. Testing has demonstrated such degradation of some discs in as little as 18 months under normal storage conditions. This failure is known as disc rot, for which there are several, mostly environmental, causes. The recordable audio CD is designed to be used in a consumer audio CD recorder. These consumer audio CD recorders use SCMS (Serial Copy Management System), an early form of digital rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The recordable audio CD is typically somewhat more expensive than standard CD-R due to lower production volume and a 3 percent AHRA royalty used to compensate the music industry for the making of a copy. High-capacity recordable CD is a higher-density recording format that can hold 20% more data than conventional discs; the higher capacity is incompatible with some recorders and recording software. CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write laser, in this case, is used to heat and alter the properties (amorphous vs. crystalline) of the alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players cannot read CD-RW discs, although most later CD audio players and stand-alone DVD players can. CD-RWs follow the Orange Book standard. The ReWritable Audio CD is designed to be used in a consumer audio CD recorder, which will not (without modification) accept standard CD-RW discs. Like the recordable audio CD, it uses SCMS to conform to the United States' AHRA, and it is typically somewhat more expensive than its non-audio counterpart, due to lower volume and the 3 percent AHRA royalty.
Copy protection
The Red Book audio specification, except for a simple anti-copy statement in the subcode, does not include any copy protection mechanism.
Starting at least as early as 2001, record companies attempted to market copy-protected, non-standard compact discs, which cannot be ripped (copied) to hard drives or easily converted to other formats (such as FLAC, MP3 or Vorbis). One major drawback of these copy-protected discs is that most will not play on computer CD-ROM drives, nor on some standalone CD players that use CD-ROM mechanisms. Philips has stated that such discs are not permitted to bear the trademarked Compact Disc Digital Audio logo because they violate the Red Book specifications. Numerous copy-protection systems have been countered by readily available, often free, software, or even by simply turning off AutoPlay to prevent the DRM executable program from running.
========================================
[SOURCE: https://techcrunch.com/events/strictlyvc-2026-events/] | [TOKENS: 603]
StrictlyVC 2026
StrictlyVC events deliver exclusive insider VC content while creating meaningful connections with leading investors and entrepreneurs. Join us for an evening of intimate interviews with industry heavy hitters and impactful conversation. If you're an investor looking to mingle with your peers and watch some killer content, this event is for you. Previous events included interviews with Sam Altman (OpenAI), Marc Andreessen (Andreessen Horowitz), Katie Haun (Haun Ventures), Hans Tung (GGV Capital), and many more. Past StrictlyVC speakers include Sam Altman (Co-Founder & CEO, OpenAI), TS Anil (CEO, Monzo Bank), Baiju Bhatt (Founder, Aetherflux), Navin Chaddha (Managing Partner, Mayfield), Jay Graber (CEO, Bluesky Social), Katie Haun (Founder & CEO, Haun Ventures), Daniel Lurie (Mayor of San Francisco), Kyriakos Mitsotakis (Prime Minister of Greece), Nazo Moosa (Managing Director, Paladin Capital Group), Myrto Papathanou (Partner, Metavallon VC), Ethan Thornton (CEO and Founder, Mach Industries), and Elizabeth Weil (Founder & Partner, Scribble Ventures).
========================================
[SOURCE: https://en.wikipedia.org/wiki/Control_system] | [TOKENS: 1305]
Control system
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems used for controlling processes or machines. Control systems are designed through the control engineering process. For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint. For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.
Open-loop and closed-loop control
Fundamentally, there are two types of control loop: open-loop control (feedforward) and closed-loop control (feedback). The definition of a closed-loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero." Likewise: "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control."
Feedback control systems
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop. In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver can alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed with minimal delay and overshoot by controlling the power output of the vehicle's engine. Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways. Closed-loop controllers have several advantages over open-loop controllers, such as disturbance rejection and reduced sensitivity to parameter variations. In some systems, closed-loop and open-loop control are used simultaneously; in such systems, the open-loop control is termed feedforward and serves to further improve reference-tracking performance. A common closed-loop controller architecture is the PID controller.
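As a minimal illustration of the feedback loop just described, here is a discrete PID controller sketch. The gains and the toy first-order plant are invented for demonstration and are not tuned for any real system.

```python
# Minimal discrete PID controller: measure PV, compare with SP,
# and feed the weighted error terms back into the plant.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, pv: float) -> float:
        error = setpoint - pv
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy plant: speed rises with applied power and decays with drag.
pid = PID(kp=0.8, ki=0.4, kd=0.05, dt=0.1)
speed, setpoint = 0.0, 30.0
for _ in range(200):
    power = pid.update(setpoint, speed)
    speed += (power - 0.5 * speed) * 0.1   # crude vehicle dynamics
print(f"speed after 20 s: {speed:.2f} (setpoint {setpoint})")
```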
Logic control
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs. Logic controllers may respond to switches and sensors, and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications; examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with product, and then seal it in an automatic packaging machine. PLC software can be written in many different ways: ladder diagrams, SFC (sequential function charts), or statement lists.
On–off control
On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domestic thermostat can be described as an on–off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor: when the pressure (PV) drops below the setpoint (SP), the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective (a minimal sketch of such a controller appears at the end of this section).
Linear control
Linear control systems use negative feedback to produce a control signal that maintains the controlled process variable (PV) at the desired setpoint (SP). There are several types of linear control systems with different capabilities.
Fuzzy logic
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex, continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true. The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace." Measurements from the real world (such as the temperature of a furnace) are fuzzified, the logic is calculated arithmetically (as opposed to Boolean logic), and the outputs are de-fuzzified to control equipment. When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution, and it might appear that the fuzzy design was unnecessary. However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive. Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics.
Physical implementation
Control system implementations range from compact controllers, often with dedicated software for a particular machine or device, to distributed control systems for industrial process control of a large physical plant. Logic systems and feedback controllers are usually implemented with programmable logic controllers.
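The on–off control described earlier can be sketched in a few lines. This example adds a small hysteresis band so the heater does not rapidly cycle around the setpoint; the band width and the room model are invented for illustration, though the behavior is typical of bi-metallic thermostats.

```python
# Sketch of on-off (bang-bang) control with hysteresis: the heater
# switches fully on below SP - band and fully off above SP + band.

def thermostat(pv: float, heating: bool, sp: float,
               band: float = 0.5) -> bool:
    """Return the new heater state for temperature PV and setpoint SP."""
    if pv < sp - band:
        return True        # too cold: switch heater on
    if pv > sp + band:
        return False       # too warm: switch heater off
    return heating         # inside the band: keep current state

temp, heater, setpoint = 15.0, False, 20.0
for _ in range(120):                     # simulate two hours
    heater = thermostat(temp, heater, setpoint)
    temp += 0.3 if heater else 0.0       # heater warms the room
    temp -= 0.1 * (temp - 10.0) / 10.0   # heat loss toward 10 C outside
print(f"after 2 h: {temp:.1f} C, heater {'on' if heater else 'off'}")
```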
The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides many open-source hardware devices which can be connected to create more complex data acquisition and control systems.
========================================
[SOURCE: https://en.wikipedia.org/wiki/R3000] | [TOKENS: 823]
R3000
The R3000 is a 32-bit RISC microprocessor chipset developed by MIPS Computer Systems that implemented the MIPS I instruction set architecture (ISA). Introduced in June 1988, it was the second MIPS implementation, succeeding the R2000 as the flagship MIPS microprocessor. It operated at 20, 25 and 33.33 MHz.
Description
The MIPS I instruction set is small compared to those of the contemporary 80x86 and 680x0 architectures, encoding only the more commonly used operations and supporting few addressing modes. Its fixed instruction length and use of only three instruction formats simplified instruction decoding and processing. It employed a 5-stage instruction pipeline, enabling execution at a rate approaching one instruction per cycle, unusual for its time. The architecture makes use of a branch delay slot; the compilers for the R3000 available from MIPS Computer Systems were typically able to fill the delay slot with a useful instruction 70 to 90 percent of the time, and in some military applications the figure was 75 to 80 percent. This MIPS generation supports up to four coprocessors. In addition to the CPU core, the R3000 microprocessor includes a Control Processor (CP), which contains a Translation Lookaside Buffer and a Memory Management Unit; the CP works as a coprocessor. Besides the CP, the R3000 can also support an external R3010 numeric coprocessor, along with two other external coprocessors. The R3000 CPU does not include a level 1 cache. Instead, its on-chip cache controller operates external data and instruction caches of up to 256 KB each, and it can access both caches during the same clock cycle. The R3000 was a further development of the R2000 with minor improvements, including a larger TLB and a faster bus to the external caches. The R3000 die contained 115,000 transistors and measured about 75,000 square mils (48 mm²). MIPS was a fabless semiconductor company, so the R3000 was fabricated by MIPS partners including Integrated Device Technology (IDT), LSI Logic, NEC Corporation, Performance Semiconductor, and others. It was fabricated in a 1.2 μm complementary metal–oxide–semiconductor (CMOS) process with two levels of aluminium interconnect.
Use in workstations and servers
The RISC approach found much success and was quickly adopted by many companies in their workstations and servers, a number of which were built around the R3000; several derivatives of the R3000 were also produced for non-embedded applications.
Use in real-time systems
The MIPS R3000 could be used for real-time computing; indeed, an editor of Computer Design journal characterized the R3000 as "about the cleanest of the RISC processors to implement a real-time operating system". It was possible for embedded implementations of the R3000 to customize the processor in certain ways, such as adding debugging facilities or adding traps on unimplemented features and opcodes. The R3000 was used as an embedded systems microprocessor by a number of companies. Many of these embedded systems were used in defense/avionics applications, and as such, by the early 1990s there were a number of Ada programming language cross-compiler implementations available for the R3000. The Joint Integrated Avionics Working Group (JIAWG), a United States government initiative of the late 1980s intended to establish common standards for the next generation of U.S. Air Force, Navy, and Army aircraft, selected the R3000 as one of two 32-bit instruction set architectures for real-time embedded systems applications (the other being the Intel i960).
In defense industry uses, the R3000 was often a successor to the 16-bit MIL-STD-1750A architecture.
Use in other lower-cost designs
Even after advances in technology rendered the R3000 obsolete for high-performance systems, it found continued use in lower-cost designs, and a number of derivatives of the R3000 were produced for embedded applications.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-6] | [TOKENS: 4993]
Orion (constellation)
Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including the three that make up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky.
Characteristics
Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday; the brightest stars of Orion are then visible at twilight for a few hours around local noon, low in the brightest section of the sky to the north, where the Sun sits just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is not visible in Antarctica, because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky.
Navigational aid
Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), are also points in both the Winter Triangle and the Circle.
Features
Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky.
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star, Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or the Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, is 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth and shines with a magnitude of 1.70; including its ultraviolet light, it is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two components orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it lies approximately on the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lie at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars form a small triangle that marks Orion's head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori), which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis (the former known as the Trapezium) and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years.
Named for the four bright stars that form a trapezoid, it is largely illuminated by its brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible over very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions.
History and mythology
The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is also used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as in other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on the upper right) together form the reflex bow or the lifted scythe.
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), with the stars "hanging" from the Belt known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA, "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion; Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally "fool"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. The three mentions are Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions (Xiù, 宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt.
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion, representing the sound of the word, was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) in Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter), consisting of Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for "The Three Wise Men"). The Ojibwe (Chippewa) Native Americans call this constellation Mesabi, the Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married the daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino ethnic groups referred to the Belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs in the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria". In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki; the rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan.
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and is sometimes shown holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction: the Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models.
Future
Orion is located on the celestial equator, but it will not remain so indefinitely because of the precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Remote_control] | [TOKENS: 4447]
Remote control
A remote control, also known colloquially as a remote or clicker, is an electronic device used to operate another device from a distance, usually wirelessly. In consumer electronics, a remote control can be used to operate devices such as a television set, DVD player or other digital home media appliance. A remote control can allow operation of devices that are out of convenient reach for direct operation of controls. They function best when used from a short distance. This is primarily a convenience feature for the user. In some cases, remote controls allow a person to operate a device that they otherwise would not be able to reach, as when a garage door opener is triggered from outside. Early television remote controls (1956–1977) used ultrasonic tones. Present-day remote controls are commonly consumer infrared devices which send digitally coded pulses of infrared radiation. They control functions such as power, volume, channels, playback, track change, energy, fan speed, and various other features. Remote controls for these devices are usually small wireless handheld objects with an array of buttons, used to adjust various settings such as television channel, track number, and volume. The remote control code, and thus the required remote control device, is usually specific to a product line. However, there are universal remotes, which emulate the remote controls made for most major-brand devices. Remote controls in the 2000s include Bluetooth or Wi-Fi connectivity, motion-sensor capabilities and voice control. Remote controls for Smart TVs from the 2010s onward may feature a standalone keyboard on the rear side to facilitate typing, and may be usable as a pointing device.
History
Wired and wireless remote control was developed in the latter half of the 19th century to meet the need to control unmanned vehicles (for the most part military torpedoes). These included a wired version by German engineer Werner von Siemens in 1870, radio-controlled ones by British engineers Ernest Wilson and C. J. Evans (1897), and a prototype that inventor Nikola Tesla demonstrated in New York in 1898. In 1903, Spanish engineer Leonardo Torres Quevedo introduced a radio-based control system called the "Telekino" at the Paris Academy of Sciences, which he hoped to use to control a dirigible airship of his own design. Unlike previous on/off techniques, the Telekino could execute a whole set of distinct mechanical actions over a single communication channel. From 1904 to 1906, Torres tested the Telekino on a three-wheeled land vehicle with an effective range of 20 to 30 meters, and then by guiding a crewed, electrically powered boat at a standoff range of up to 2 kilometers. The first remote-controlled model airplane flew in 1932, and the use of remote control technology for military purposes was worked on intensively during the Second World War, one result being the German Wasserfall missile. By the late 1930s, several radio manufacturers offered remote controls for some of their higher-end models. Most of these were connected to the set being controlled by wires, but the Philco Mystery Control (1939) was a battery-operated low-frequency radio transmitter, making it the first wireless remote control for a consumer electronics device. Using pulse-count modulation, it also was the first digital wireless remote control.
One of the first remotes intended to control a television was developed by Zenith Radio Corporation in 1950. The remote, called Lazy Bones, was connected to the television by a wire. A wireless remote control, the Flash-Matic, was developed in 1955 by Eugene Polley. It worked by shining a beam of light onto one of four photoelectric cells, but the cells did not distinguish between light from the remote and light from other sources. The Flash-Matic also had to be pointed very precisely at one of the sensors in order to work. In 1956, Robert Adler developed Zenith Space Command, a wireless remote. It was mechanical and used ultrasound to change the channel and volume. When the user pushed a button on the remote control, it struck a bar and clicked (hence these remotes were commonly called "clickers"); the mechanism was akin to a pluck. Each of the four bars emitted a different fundamental frequency with ultrasonic harmonics, and circuits in the television detected these sounds and interpreted them as channel-up, channel-down, sound-on/off, and power-on/off. Later, the rapid decrease in the price of transistors made possible cheaper electronic remotes that contained a piezoelectric crystal fed by an oscillating electric current at a frequency near or above the upper threshold of human hearing, though still audible to dogs. The receiver contained a microphone attached to a circuit that was tuned to the same frequency. Some problems with this method were that the receiver could be triggered accidentally by naturally occurring noises (or deliberately by metal against glass, for example), and that some people could hear the lower ultrasonic harmonics. In 1970, RCA introduced an all-electronic remote control that used digital signals and metal–oxide–semiconductor field-effect transistor (MOSFET) memory. This was widely adopted for color television, replacing motor-driven tuning controls. The impetus for a more complex type of television remote control came in 1973, with the development of the Ceefax teletext service by the BBC. Most commercial remote controls at that time had a limited number of functions, sometimes as few as three: next channel, previous channel, and volume/off. This type of control did not meet the needs of Teletext sets, where pages were identified with three-digit numbers. A remote control that selects Teletext pages would need buttons for each numeral from zero to nine, as well as other control functions, such as switching from text to picture, and the normal television controls of volume, channel, brightness, color intensity, etc. Early Teletext sets used wired remote controls to select pages, but the continuous use of the remote control required for Teletext quickly indicated the need for a wireless device. So BBC engineers began talks with one or two television manufacturers, which led to early prototypes in around 1977–1978 that could control many more functions. ITT was one of the companies, and it later gave its name to the ITT protocol of infrared communication. In 1980, the most popular remote control was the Starcom Cable TV Converter (from Jerrold Electronics, a division of General Instrument), which used 40-kHz sound to change channels. Then a Canadian company, Viewstar, Inc., was formed by engineer Paul Hrivnak and started producing a cable TV converter with an infrared remote control. The product was sold through Philips for approximately $190 CAD.
The Viewstar converter was an immediate success, the millionth converter being sold on March 21, 1985, with 1.6 million sold by 1989. Earlier, the Blab-off, a wired remote control created in 1952, turned a television's sound on or off so that viewers could avoid hearing commercials.

In the 1980s Steve Wozniak of Apple started a company named CL 9, whose purpose was to create a remote control that could operate multiple electronic devices. The CORE unit (Controller Of Remote Equipment) was introduced in the fall of 1987. The advantage of this remote control was that it could "learn" remote signals from different devices. It could perform specific or multiple functions at various times with its built-in clock, and it was the first remote control that could be linked to a computer and loaded with updated software code as needed. The CORE unit never made a huge impact on the market: it was much too cumbersome for the average user to program, although it received rave reviews from those who could.[citation needed] These obstacles eventually led to the demise of CL 9, but two of its employees continued the business under the name Celadon, producing one of the first computer-controlled learning remote controls on the market.

In the 1990s, cars were increasingly sold with electronic remote control door locks. These remotes transmit a signal to the car which locks or unlocks the doors, and sometimes opens the trunk. An aftermarket device sold in some countries is the remote starter, which enables a car owner to start the car remotely. This feature is most associated with countries with cold winter climates, where users may wish to run the car for several minutes before they intend to use it, so that the car heater and defrost systems can remove ice and snow from the windows.

By the early 2000s, the number of consumer electronic devices in most homes had greatly increased, along with the number of remotes to control those devices. According to the Consumer Electronics Association, an average US home has four remotes.[citation needed] To operate a home theater, as many as five or six remotes may be required, including one for the cable or satellite receiver, VCR or digital video recorder (DVR/PVR), DVD player, TV and audio amplifier. Several of these remotes may need to be used sequentially for some programs or services to work properly, and as there are no accepted interface guidelines, the process is increasingly cumbersome. One solution used to reduce the number of remotes is the universal remote, a remote control programmed with the operation codes for most major brands of TVs, DVD players and other devices. In the early 2010s, many smartphone manufacturers began incorporating infrared emitters into their devices, enabling their use as universal remotes via an included or downloadable app.

Technique

The main technology used in home remote controls is infrared (IR) light. The signal between a remote control handset and the device it controls consists of pulses of infrared light, which is invisible to the human eye but can be seen through a digital camera, video camera or phone camera. The transmitter in the remote control handset sends out a stream of pulses of infrared light when the user presses a button on the handset. The transmitter is often a light-emitting diode (LED) built into the pointing end of the handset. The infrared light pulses form a pattern unique to that button.
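As an illustration of how a button press becomes such a pattern, the sketch below builds the mark/space timing list for a single frame of an NEC-style pulse-distance format (the NEC protocol is one of the consumer formats mentioned later in this section). The frame layout and timings follow commonly published descriptions of that format, but the function and constant names are hypothetical, and a real handset would additionally switch a roughly 38 kHz carrier on during each mark.

    # A minimal sketch, assuming an NEC-style pulse-distance format: a 9 ms
    # leading mark and 4.5 ms space, then an 8-bit address and an 8-bit
    # command, each followed by its bitwise inverse as an error check.
    HEADER = (9000, 4500)   # leading mark/space, in microseconds
    BIT_0 = (562, 562)      # short mark + short space -> logical 0
    BIT_1 = (562, 1687)     # short mark + long space  -> logical 1

    def nec_frame(address: int, command: int) -> list:
        """Return (mark_us, space_us) pairs for a single button press."""
        pairs = [HEADER]
        for byte in (address, address ^ 0xFF, command, command ^ 0xFF):
            for i in range(8):           # bits are sent least significant first
                pairs.append(BIT_1 if (byte >> i) & 1 else BIT_0)
        pairs.append((562, 0))           # trailing stop mark
        return pairs

    # Hypothetical example: command 0x18 on device address 0x00.
    print(nec_frame(0x00, 0x18)[:3])

Because each button maps to a different command byte, every key on the handset yields its own distinct pulse pattern.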
The receiver in the device recognizes the pattern and causes the device to respond accordingly. Most remote controls for electronic appliances use a near-infrared diode to emit a beam of light that reaches the device; a 940 nm wavelength LED is typical. This infrared light is invisible to the human eye but is picked up by sensors on the receiving device. Video cameras see the diode as if it produced visible purple light.

With a single-channel (single-function, one-button) remote control, the presence of a carrier signal can be used to trigger a function. For multi-channel (normal multi-function) remote controls, more sophisticated procedures are necessary: one consists of modulating the carrier with signals of different frequencies. After the receiver demodulates the received signal, it applies the appropriate frequency filters to separate the respective signals. One can often hear the signals being modulated on the infrared carrier by operating a remote control in very close proximity to an AM radio not tuned to a station. Today, IR remote controls almost always use a pulse-width-modulated code, encoded and decoded by a digital computer: a command from a remote control consists of a short train of pulses of carrier-present and carrier-not-present of varying widths.[citation needed]

Different manufacturers of infrared remote controls use different protocols to transmit the infrared commands. The RC-5 protocol, which has its origins within Philips, uses a total of 14 bits for each button press. The bit pattern is modulated onto a carrier frequency that, again, can differ between manufacturers and standards; in the case of RC-5, the carrier is 36 kHz. Other consumer infrared protocols include the various versions of SIRCS used by Sony, RC-6 from Philips, the Ruwido R-Step, and the NEC TC101 protocol.

Since infrared (IR) remote controls use light, they require line of sight to operate the destination device. The signal can, however, be reflected by mirrors, just like any other light source. If operation is required where no line of sight is possible, for instance when controlling equipment in another room or installed in a cabinet, many brands of IR extenders are available on the market. Most of these have an IR receiver that picks up the IR signal and relays it via radio waves to the remote part, which has an IR transmitter mimicking the original IR control. Infrared receivers also tend to have a more or less limited operating angle, which mainly depends on the optical characteristics of the phototransistor; however, the operating angle can easily be increased by placing a matte transparent object in front of the receiver.

Radio remote control (RF remote control) is used to control distant objects using a variety of radio signals transmitted by the remote control device. As a complementary method to infrared remote controls, radio remote control is used with electric garage door or gate openers, automatic barrier systems, burglar alarms and industrial automation systems. Standards used for RF remotes include Bluetooth AVRCP, Zigbee (RF4CE) and Z-Wave. Most remote controls use their own coding, transmitting from 8 to 100 or more pulses, with fixed or rolling codes, using OOK or FSK modulation. Transmitters and receivers can also be universal, meaning they are able to work with many different codings.
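To make the RC-5 figures above concrete, here is a minimal sketch assuming the commonly published RC-5 layout: two start bits, a toggle bit, five address bits and six command bits, each bit Manchester-encoded as two 889 µs half-bits (1.778 ms per bit) and modulated onto the 36 kHz carrier. The helper names and the example command value are illustrative.

    # A minimal sketch of an RC-5 frame (assumed layout: 2 start bits,
    # 1 toggle bit, 5 address bits, 6 command bits).
    HALF_BIT_US = 889   # each 1.778 ms bit is two half-bits

    def rc5_bits(address: int, command: int, toggle: int) -> list:
        bits = [1, 1, toggle & 1]                               # start + toggle
        bits += [(address >> i) & 1 for i in range(4, -1, -1)]  # 5 address bits
        bits += [(command >> i) & 1 for i in range(5, -1, -1)]  # 6 command bits
        return bits

    def manchester(bits: list) -> list:
        """Expand each bit into two (carrier_state, duration_us) half-bits.
        In RC-5, logical 1 is space-then-mark; logical 0 is mark-then-space."""
        out = []
        for b in bits:
            first, second = ("space", "mark") if b else ("mark", "space")
            out += [(first, HALF_BIT_US), (second, HALF_BIT_US)]
        return out

    # Illustrative example: address 0 with command 16, toggle bit clear.
    frame = manchester(rc5_bits(address=0, command=16, toggle=0))
    print(len(frame), "half-bits")   # 14 bits -> 28 half-bits

The toggle bit flips on each new key press, which lets the receiver tell a held-down key from a repeated press. The fixed- versus rolling-code distinction just mentioned for RF remotes can be sketched in the same spirit: transmitter and receiver share a secret and a synchronized counter, and the receiver accepts a code only if it corresponds to a counter value slightly ahead of its own, so a captured transmission replayed later is rejected. Real systems such as KeeLoq use dedicated hardware ciphers; the HMAC construction, key and window size below are assumptions for demonstration only.

    # A purely illustrative rolling-code check; real garage-door systems
    # use dedicated ciphers, not HMAC. Key and window size are made up.
    import hashlib
    import hmac

    SHARED_KEY = b"factory-paired-secret"   # hypothetical pairing secret
    WINDOW = 16                             # how far ahead the receiver resyncs

    def rolling_code(counter: int) -> bytes:
        """Transmitter side: derive the code sent for a given counter value."""
        return hmac.new(SHARED_KEY, counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()

    class Receiver:
        def __init__(self) -> None:
            self.counter = 0
        def accept(self, code: bytes) -> bool:
            # Accept only codes derived from a counter ahead of ours, within
            # the window; an old replayed code can never match.
            for c in range(self.counter + 1, self.counter + 1 + WINDOW):
                if hmac.compare_digest(code, rolling_code(c)):
                    self.counter = c      # resynchronize
                    return True
            return False

    rx = Receiver()
    press = rolling_code(1)     # first button press after pairing
    print(rx.accept(press))     # True:  fresh code is accepted
    print(rx.accept(press))     # False: replaying the same capture fails

The small acceptance window lets the receiver resynchronize after button presses made while out of range.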
Where both transmitter and receiver are universal, the transmitter is normally called a universal remote control duplicator, because it is able to copy existing remote controls, while the receiver is called a universal receiver, because it works with almost any remote control on the market.

A radio remote control system commonly has two parts: transmit and receive. The transmitter part is divided into two parts, the RF remote control and the transmitter module, which allows the transmitter module to be used as a component in a larger application. The transmitter module is small, but using it on its own requires detailed knowledge; combined with the RF remote control it is much simpler to use. The receiver is generally one of two types: a super-regenerative receiver or a superheterodyne. The super-regenerative receiver works like an intermittent oscillation detection circuit; the superheterodyne works like the one in a radio receiver. The superheterodyne receiver is preferred because of its stability, high sensitivity, relatively good anti-interference ability, small package and lower price.

Usage

Remote control is used for controlling substations, pumped-storage power stations and HVDC plants. For these systems, PLC systems working in the longwave range are often used: a subset of power-line communication that sends remote control signals over energized AC power lines. This was used to remotely control home automation before the invention of Wi-Fi-connected smart switches.

Garage and gate remote controls, also called clickers or openers, are very common, especially in countries such as the US, Australia, and the UK, where garage doors, gates and barriers are widely used. Such a remote is very simple by design, usually having only one button, though some have more buttons to control several gates from one control. Such remotes can be divided into two categories by the encoder type used: fixed code and rolling code. A remote containing DIP switches is likely to use fixed code, an older technology that was once widely used. However, fixed codes have been criticized for their lack of security, so rolling code has been used more and more widely in later installations (the rolling-code sketch in the Technique section above illustrates the receiver-side idea).

Remotely operated torpedoes were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided). The Brennan torpedo, invented by Louis Brennan in 1877, was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". In 1898 Nikola Tesla publicly demonstrated a "wireless" radio-controlled torpedo that he hoped to sell to the U.S. Navy.

Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote-controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket. As head of the secret RFC experimental works at Feltham, A. M. Low was the first person to use radio control successfully on an aircraft, an "Aerial Target". It was "piloted" from the ground by future world aerial speed record holder Henry Segrave.
Low's systems encoded the command transmissions as a countermeasure to prevent enemy intervention. By 1918 the secret D.C.B. Section of the Royal Navy's Signals School, Portsmouth, under the command of Eric Robinson V.C., used a variant of the Aerial Target's radio control system to control, from "mother" aircraft, different types of naval vessels, including a submarine.

The military also developed several early remote control vehicles. In World War I, the Imperial German Navy employed FL-boats (Fernlenkboote) against coastal shipping. These were driven by internal combustion engines and controlled remotely from a shore station through several miles of wire wound on a spool on the boat, with an aircraft used to signal directions to the shore station. The boats carried a high-explosive charge in the bow and traveled at speeds of thirty knots. The Soviet Red Army used remotely controlled teletanks during the 1930s in the Winter War against Finland and the early stages of World War II. A teletank is controlled by radio from a control tank at a distance of 500 to 1,500 meters, the two constituting a telemechanical group. The Red Army fielded at least two teletank battalions at the beginning of the Great Patriotic War. There were also remotely controlled cutters and experimental remotely controlled planes in the Red Army.

Remote controls in military usage employ jamming and countermeasures against jamming. Jammers are used to disable or sabotage the enemy's use of remote controls. The distances for military remote controls also tend to be much longer, up to the intercontinental, satellite-linked remote controls used by the U.S. for its unmanned airplanes (drones) in Afghanistan, Iraq, and Pakistan. Remote controls are used by insurgents in Iraq and Afghanistan to attack coalition and government troops with roadside improvised explosive devices, and terrorists in Iraq are reported in the media to use modified TV remote controls to detonate bombs.

In the winter of 1971, the Soviet Union explored the surface of the Moon with the lunar vehicle Lunokhod 1, the first roving remote-controlled robot to land on another celestial body. Remote control technology is also used more broadly in space travel; the Soviet Lunokhod vehicles, for instance, were remote-controlled from the ground. Many space exploration rovers can be remotely controlled, though the vast distance to a vehicle results in a long time delay between transmission and receipt of a command.

Existing infrared remote controls can be used to control PC applications.[citation needed] Any application that supports shortcut keys can be controlled via infrared remote controls from other home devices (TV, VCR, AC).[citation needed] This is widely used[citation needed] with multimedia applications for PC-based home theater systems. For this to work, one needs a device that decodes IR remote control data signals and a PC application that communicates with this device. The connection can be made via a serial port, USB port or motherboard IrDA connector. Such devices are commercially available but can also be homemade using low-cost microcontrollers.[citation needed] LIRC (Linux Infrared Remote Control) and WinLIRC (for Windows) are software packages developed for controlling a PC with a TV remote, and can also be used with homebrew remotes with little modification.

Remote controls are used in photography, in particular to take long-exposure shots.
Many action cameras, such as GoPros, as well as standard DSLRs, including Sony's Alpha series, incorporate Wi-Fi-based remote control systems. These can often be accessed and even controlled via cell phones and other mobile devices.

Video game consoles did not use wireless controllers until relatively recently,[when?] mainly because of the difficulty involved in playing a game while keeping an infrared transmitter pointed at the console. Early wireless controllers were cumbersome and, when powered by alkaline batteries, lasted only a few hours before the batteries needed replacement. Some wireless controllers were produced by third parties, in most cases using a radio link instead of infrared. Even these were very inconsistent, and in some cases had transmission delays that made them virtually useless. Examples include the Double Player for the NES, the Master System Remote Control System and the Wireless Dual Shot for the PlayStation. The first official wireless game controller made by a first-party manufacturer was the CX-42 for the Atari 2600. The Philips CD-i 400 series also came with a remote control, and the WaveBird was later produced for the GameCube. In the seventh generation of gaming consoles, wireless controllers became standard. Some wireless controllers, such as those of the PlayStation 3 and Wii, use Bluetooth; others, like the Xbox 360's, use proprietary wireless protocols.

Standby power

To be turned on by a wireless remote, the controlled appliance must always be partly on, consuming standby power.

Alternatives

Hand-gesture recognition has been researched as an alternative to remote controls for television sets.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ben_Gurion_Airport] | [TOKENS: 6102]
Ben Gurion Airport

Ben Gurion International Airport[a] (IATA: TLV, ICAO: LLBG), commonly known by the Hebrew-language acronym Natbag (נתב״ג‎), is the main international airport of Israel. Situated on the northern outskirts of the city of Lod and directly south of the city of Or Yehuda, it is the busiest airport in the country. It is located 45 kilometres (28 mi) to the northwest of Jerusalem and 20 kilometres (12 mi) to the southeast of Tel Aviv. It was known as Lod Airport until 1973, when it was renamed in honour of David Ben-Gurion (1886–1973), the first prime minister of Israel. The airport serves as a hub for El Al, Israir, Arkia, and Sundor, and is managed by the Israel Airports Authority.

In 2024, Ben Gurion International Airport handled 14.5 million passengers, making it one of the busiest airports in the Middle East. It is considered to be among the five best airports in the Middle East owing to its passenger experience and its high level of security; while it has been the target of several terrorist attacks, no attempt to hijack a plane departing from Ben Gurion Airport has ever succeeded. The airport is of great importance to Israel as it is one of the few convenient entry points into the country for most passengers. As Israel's only international airport at the time, it was regarded as a single point of failure, which led to the opening of Ramon Airport in 2019.

History

The airport began during the British Mandate for Palestine as an airstrip of two unpaved runways on the outskirts of the town of Lydda (now Lod), near the Templer colony of Wilhelma. It was built in 1934, largely at the urging of Airwork Services. The first passenger service at the new airport was the Misr Airwork route Cairo–Lydda–Nicosia, inaugurated on 3 August 1935. Subsequently, Misr flew via Lydda to Haifa and Baghdad. The first continental European airline with a regular service to Lydda was LOT Polish Airlines, beginning on 4 April 1937. By that time, Lydda Airport boasted four fully operational concrete runways. The Dutch airline KLM, which since 1933 had stopped at Gaza en route to Batavia, Dutch East Indies (now Jakarta, Indonesia), moved the stop to Lydda in 1937. Imperial Airways, too, used Lydda as a refueling stop en route to India.

During World War II, Imperial Airways and later British Overseas Airways Corporation continued the service to Lydda until the fall of France in June 1940. When the Japanese military advanced into Burma and Malaya in February 1942, KLM curtailed its route to Batavia and made Lydda the eastern terminus of the route. Misr Airwork, which had suspended flights upon the British declaration of war, resumed the weekly Cairo–Lydda–Nicosia service in May 1940. In 1943, the airport was renamed "RAF Station Lydda" and continued to serve as a major airfield for military air transport and aircraft ferry operations between military bases in Europe, Africa, the Middle East (mainly Iraq and Persia) and South/Southeast Asia. In 1944, as the German threat in the Middle East subsided, the Aviron Aviation Company initiated service four times a week between Lydda and Haifa. The first civilian transatlantic route, New York City to Lydda Airport, was inaugurated by TWA in 1946.

The British gave up the airport at the end of April 1948. Soldiers of the Israel Defense Forces captured the airport on 10 July 1948, in Operation Danny, transferring control to the newly declared State of Israel.
In 1948 the Israelis changed the official name of the airport from "Lod International Airport" to "Tel Aviv-Lod International Airport". Flights resumed on 24 November 1948. That year, 40,000 passengers passed through the terminal. By 1952, the number had risen to 100,000 a month. Within a decade, air traffic increased to the point where local flights had to be redirected to Tel Aviv's other airport, the Sde Dov airfield (SDV) on the city's northern coast. By the mid-1960s, 14 international airlines were landing at the airport. The airport's name was changed from Lod to Ben Gurion International Airport in 1973 to honour Israel's first prime minister, David Ben-Gurion, who died that year.

While Ben Gurion Airport has been a target of armed terrorist attacks, strict security precautions have ensured that no aircraft departing from the airport has ever been hijacked. There have, however, been two major incidents in the airport's history involving foreign airliners that took off from other countries and landed at Ben Gurion Airport. In the first incident, on 8 May 1972, four Black September terrorists hijacked a Sabena flight en route from Vienna and forced it to land at Ben Gurion Airport. Sayeret Matkal commandos led by Ehud Barak and including Benjamin Netanyahu (both future Israeli prime ministers) stormed the plane, killing two of the hijackers and capturing the other two. One passenger was killed. Later that month, on 30 May 1972, in an attack known as the Lod Airport massacre, 24 people were killed and 80 injured when three members of the Japanese Red Army sprayed machine-gun fire into the passenger arrival area. The victims included Aharon Katzir, a prominent protein biophysicist and brother of Israel's fourth president. Those injured included a group of twenty Puerto Rican tourists who had just arrived in Israel. The only terrorist who survived was Kozo Okamoto, who received a life sentence but was released in 1985 as part of a prisoner exchange with the PFLP-GC.

More buildings and runways were added over the years, but with the onset of mass immigration from Ethiopia and the former Soviet Union in the 1980s and 1990s, as well as the global increase in international business travel, the existing facilities became severely inadequate, prompting the design of a new state-of-the-art terminal that could also accommodate the expected tourism influx for the 2000 millennium celebrations. The decision to go ahead with the project was reached in January 1994, but the new terminal, known as Terminal 3, only opened its doors a decade later, on 2 November 2004.

During the 2014 conflict with Gaza, several airlines suspended their flights to the airport for a few days. In October 2023, with the outbreak of the Gaza war, the number of airlines flying into the airport dropped to just 7; by February 2024, the number had recovered to only 45.

The longest nonstop flight to have departed the airport was a private Airbus A340-500 owned by billionaire casino mogul Sheldon Adelson, which flew on 2 January 2017 to Honolulu on a route over the Arctic Ocean; the flight was projected to last 17 hours and 40 minutes. Ramon Airport, an international airport near the southern Israeli city of Eilat, serves as a diversion airport for Ben Gurion Airport.

On 4 May 2025, the Houthis launched a ballistic missile which landed within the perimeter of the airport, injuring six people. Magen David Adom said that two others were treated for acute anxiety.
A month later, the airport was shut down for eleven days (13–24 June) as Israel closed its skies to civil aviation during the Iran–Israel war.

Passenger terminals

Before the opening of Terminal 3, Terminal 1 was the main terminal building at Ben Gurion Airport. At that time, the departures check-in area was located on the ground floor. From there, passengers proceeded upstairs to the main departures hall, which contained passport control, duty-free shops, VIP lounges, a synagogue and boarding gates. At the gates, travelers descended a flight of stairs back to the ground floor, where waiting shuttle buses transported them to airplanes on the tarmac. The arrivals hall, with passport control, luggage carousels, duty-free pick-up and customs, was located at the south end of the building. Apron buses transferred passengers and crews between the terminal and airplanes parked on the tarmac over 500 m (1,600 ft) away.

After Terminal 3 opened, Terminal 1 was closed except for domestic flights to the airport in Eilat and government flights such as special immigrant flights from North America and Africa. Chartered flights organised by Nefesh B'Nefesh carrying immigrants from North America and England use this terminal for their landing ceremonies several times a year. Although Terminal 1 was closed between 2003 and 2007, the building served as a venue for various events and large-scale exhibitions, including the Bezalel Academy of Arts Centennial Exhibition held there in 2006. The renovations for the terminal were designed by Yosef Assa around three atmospheric themes: the public halls have a Land-of-Israel character, with walls painted in the colors of Israel's Judean, Jerusalem and Galilee mountains; the departure hall is given an atmosphere of vacation and leisure; and the arrivals hall is given a more urban theme as passengers return to the city.

In February 2006, the Israel Airports Authority announced plans to invest NIS 4.3 million in a new VIP wing for private-jet passengers and crews, as well as others interested in avoiding the main terminal. VIP ground services already existed, but a substantial increase in users justified expanding the facilities, which would also boost airport revenues. The IAA released figures showing significant growth in private-jet flights (4,059, a 36.5% increase from 2004) as well as private-jet users (14,613, a 46.2% increase from 2004). The new VIP wing, operated by an outside licensee, was to be located in an upgraded and expanded section of Terminal 1, handling all flight procedures (security check, passport control and customs) and including a hall for press conferences, a lounge, meeting rooms and a lounge for flight crews. In January 2008, however, it was announced that the IAA planned to construct a new 1,000-square-metre (11,000 sq ft) VIP terminal next to Terminal 3.

Terminal 1 was closed in 2003 and, following extensive renovations, reopened in 2007 as the domestic terminal; in July 2008 it also began catering for summer charter and low-cost flights. It remained open for these flights for the 2008 summer season, temporarily closed in October 2008 for further renovation, and reopened in the summer of 2009, when it was expected to reach a three-month capacity of 600,000 passengers on international flights.
As of 2010, several low-cost carriers' international flights were operating out of Terminal 1 year-round, including Vueling flights to Barcelona and easyJet flights to London (Luton), Manchester, Geneva, and Basel. In 2015, due to increased demand and following another expansion of the terminal, the Israel Airports Authority made Terminal 1 available to all low-cost carriers under certain conditions. Flights operating out of Terminal 1 are charged lower airport fees than those operating out of Terminal 3.

Terminal 3, which opened on 28 October 2004, replaced Terminal 1 as the main international gateway to and from Israel. The building was designed by Skidmore, Owings & Merrill (SOM); Moshe Safdie & Associates and TRA (now Black and Veatch) designed a linking structure and the airside departure areas and gates, while Ram Karmi and other Israeli architects were the local architects of record. The inaugural flight was an El Al flight to John F. Kennedy International Airport in New York City. Work on Natbag 2000, as the Terminal 3 project was known, was scheduled for completion prior to 2000 in order to handle a massive influx of pilgrims expected for the millennium celebrations. This deadline was not met due to higher-than-anticipated costs and a series of work stoppages in the wake of the bankruptcy of the main Turkish contractor. The project eventually cost an estimated one billion US dollars. Due to the proximity of the airport to the country's largest population centres and the problem of noise pollution, consideration has been given to building another international airport elsewhere in the country, such as the new Ilan and Assaf Ramon Airport in southern Israel.

The overall layout of Terminal 3 is similar to that of airports in Europe and North America, with multiple levels and considerable distances to walk after disembarking from the aircraft; the walk is assisted by escalators and moving walkways. The upper-level departures hall, with an area of over 10,000 m² (110,000 sq ft), is equipped with 110 check-in counters as well as flight information display systems. A small shopping mall, known as Buy & Bye, is open to both passengers and the general public. The mall, which includes shops, restaurants and a post office, was planned to be a draw for non-flyers too. On the same level as the mall, passengers enter passport control and the security check. Planes taking off and landing can be viewed from a distinctive tilted glass wall. The arrivals hall is located on the ground floor, where there are also 20 additional check-in counters (serving Star Alliance airlines). Car rental counters are located on an intermediate level situated between the departing and arriving passenger halls. Terminal 3 has two synagogues. After the main security check, passengers wait for their flights in the star-shaped duty-free rotunda. A variety of cafes, restaurants and duty-free shops are located there, open 24 hours a day, as well as a synagogue, banking facilities, a transit hall for connecting passengers and a desk for VAT refunds.

Terminal 3 has a total of 40 gates divided among four concourses (B, C, D, and E), each with eight jet-bridge-equipped gates (numbered 2 through 9), as well as two stand gates (bus bays 1 and 1A) from which passengers are ferried to aircraft. Two gates in Concourse E utilize dual jet bridges for more efficient processing of very large widebody aircraft. Concourses B, C, and D opened with Terminal 3 in 2004, while Concourse E was completed in 2018.
Space exists for one additional concourse (A) at Terminal 3. Free wireless internet is provided throughout the terminal. The terminal's business lounges comprise the exclusive El Al King David Lounge for frequent flyers and three Dan lounges for either privileged or paying flyers. In January 2007, the IAA announced plans for a 120-bed hotel to be located about 300 m (980 ft) west of Terminal 3; the tender for the hotel was published by the IAA in late 2017. When the terminal was built, it was said to have a capacity of up to 12 million passengers a year. In 2023, 25 million passengers were expected to pass through Ben Gurion Airport.

Terminal 2 was inaugurated in 1969 when Arkia resumed operations at the airport after the Six-Day War. Terminal 2 served domestic flights until 20 February 2007, when these services moved into the refurbished Terminal 1. Due to increased traffic in the late 1990s and the over-capacity reached at Terminal 1, an international section was added, which operated until Terminal 3 was opened. After the transfer of domestic services to Terminal 1, Terminal 2 was demolished to make room for additional air freight handling areas.

A further terminal, built in 1999, was meant to handle the crowds expected in 2000 but never officially opened. To date, it has only been used as a terminal for passengers arriving from Asia during the SARS epidemic. It was also used for the memorial ceremonies upon the arrival of the casket of Col. Ilan Ramon after the Space Shuttle Columbia disaster in February 2003, and for the arrival of Elhanan Tannenbaum and the caskets of three Israeli soldiers from Lebanon in January 2004.

Development plans

In December 2017, the IAA announced a long-term expansion plan for Ben Gurion Airport estimated to cost approximately NIS 9 billion. Plans include further expansion of Terminal 1, a new dedicated domestic-flights terminal, a major expansion of Terminal 3's landside terminal adding approximately 90 check-in counters, construction of Concourse A, and additional aircraft parking spaces and ramps. In addition, air cargo facilities would be relocated to a large, currently unused tract of land in the northern part of the airport's property (north of runway 08/26), where additional aircraft maintenance facilities would also be built. In the meantime, to ease immediate overcrowding problems at Terminal 3's landside terminal, a large temporary air-conditioned tent housing 25 check-in counters and security screening facilities was erected adjacent to Terminal 3 in the spring of 2018. This tent was later used for compulsory COVID-19 testing of all arriving passengers between 2020 and 2022. In August 2018, the IAA published a tender for the construction and operation of a new terminal dedicated to handling private and executive aircraft traffic.

In late 2021, construction began on a new interchange providing additional access to the airport from Highway 1. The new interchange significantly reduced the distance vehicles must travel to reach the airport's main terminal from the direction of Tel Aviv and other points north and west of the airport.

Office buildings

The Airport City development, a large office park, is located east of the main airport property, at the junction of the Jerusalem and Tel Aviv metropolitan areas. The head office of El Al is located at Ben Gurion Airport, as is the head office of the Israel Airports Authority.
The head offices of the Civil Aviation Authority and Challenge Airlines IL are located in the Airport City office park near the airport. Israel Aerospace Industries maintains its head office on airport grounds, as well as extensive aviation construction and repair facilities.

Runways

The runway closest to Terminals 1 and 3 is 12/30, 3,112 m (10,210 ft) in length and followed by a taxiway. Most landings take place on this runway from west to east, approaching from the Mediterranean Sea over southern Tel Aviv. During inclement weather, it may also be used for takeoffs (direction 12). An NIS 17 million renovation project completed in November 2007 reinforced the runway and made it suitable for future wide-body aircraft, and in September 2008 a new ILS serving the runway was activated. The main runway was closed from 2011 until early 2014 to accommodate the extension of runway 03/21 and other construction activity in its vicinity.

The longest runway at the airfield, at 4,062 m (13,327 ft), is 08/26, the main takeoff runway from east to west; it is referred to as "the quiet runway" since jets taking off in this direction produce less noise pollution for surrounding residents.[vague] An NIS 24 million renovation project completed in February 2006 reinforced the runway and made it suitable for wide-body aircraft such as the Airbus A380.

The original layout of the airfield, as designed by the British in the 1930s, included four intersecting 800 m (2,600 ft) runways suitable for the piston-engined aircraft of the day. None of this original layout is visible nowadays, however, since various runways were built and removed over the years as usage increased and aircraft types and needs changed.[citation needed] The main runway (12/30) is the oldest surviving runway at the airport, the quiet (08/26) and short (03/21) runways having been built in the late 1960s and 1970s. Since very little commercial traffic could operate on the short runway, the airport relied mostly on runways 12/30 and 08/26 for approximately forty years. This presented a problem: because these two runways intersect near their western ends, landing and departing aircraft must cross each other's paths. This crisscross pattern reduces the number of aircraft that can arrive at and depart from the airport, and has detrimental safety implications as well.[citation needed]

With passenger traffic projected to increase, plans were drawn up in the 1980s and 1990s for the extension of runways 03/21 and 08/26 as a means of alleviating some of Ben Gurion's safety and capacity concerns. These plans were approved in 1997 and construction began in 2010. The extension of runway 03/21 allows the airport to operate in an "open V" configuration, permitting simultaneous landings and takeoffs on runways 08/26 and 03/21, thus more than doubling the number of aircraft movements that can be handled at peak times while increasing the overall level of air safety in and around the airport. Construction took four years, cost NIS 1 billion (financed from the Israel Airports Authority budget), and was completed on 29 May 2014. It included paving 22 kilometres (14 mi) of runways and taxiways, using more than 1.5 million tons of asphalt, and laying one million metres of runway lighting cables, 50,000 metres (160,000 ft) of high-voltage power lines and 10,000 light fixtures.
The construction of several new taxiways between the existing runways and terminals also significantly reduced taxi times at the airport.

Security procedures

Security at Ben Gurion International Airport operates on several levels. All cars, taxis, buses and trucks pass through a preliminary security checkpoint before entering the airport compound. Armed guards spot-check the vehicles, looking into cars and taxis, boarding buses, and exchanging a few words with the driver and passengers. Armed security personnel stationed at the terminal entrances keep a close watch on those who enter the buildings; if someone arouses their suspicion or looks nervous, they may strike up a conversation to further assess the person's intent. Plainclothes armed personnel patrol the area outside the building, and hidden surveillance cameras operate at all times. Inside the building, both uniformed and plainclothes security officers are on constant patrol.

Departing passengers are personally questioned by security agents before arriving at the check-in desk. Until August 2007, there was a system of color codes on checked baggage, but the practice was discontinued after complaints of discrimination. In the past, checked bags were screened after the personal interview and before passengers arrived at the check-in desks. Occasionally, if security assessed a person as low-risk, they were passed straight through to the check-in desks, bypassing the main X-ray machines, a practice which also drew discrimination complaints. This process ceased in April 2014, when the main X-ray machines were removed from the passenger queuing area in Terminal 3 and baggage screening began to be performed after bags were checked in by airline representatives (as is common in most airports around the world). Terminal 1 adopted the same procedure in the summer of 2017.[citation needed] After check-in, all checked baggage is screened using sophisticated X-ray and CT scanners and placed in a pressure chamber to set off any explosive device with a trigger dependent on air pressure.

Following the check-in process, passengers continue to personal security screening and passport control. Before passing through the metal detectors and putting carry-on baggage through the X-ray machine at the security checkpoint, passports and boarding passes are re-inspected and additional questions may be asked. Before boarding the aircraft, passports and boarding passes are verified once again. Security procedures for incoming flights are less stringent, but passengers may be questioned at passport control depending on their country of origin or the countries visited prior to arrival in Israel. Passengers who have recently visited Arab countries are subject to further questioning.

Airlines and destinations

The following airlines serve regular scheduled and charter destinations at Ben Gurion Airport. Since 7 October 2023, most airlines have suspended flights or delayed their resumption due to the Israel–Hamas war and the ongoing situation in the Middle East.

Statistics

Commercial flights from Sde Dov Airport, which until its closure in July 2019 handled more domestic passengers annually than TLV, have been moved to Ben Gurion.

Ground transportation

The airport is located near Highway 1, the main Jerusalem–Tel Aviv highway, and Highway 40. The airport is accessible by car or public bus. Israel Railways operates train service from the airport to several parts of the country, and taxi stands are located outside the arrivals building.
A popular transportation option is the shared taxi van, known in Hebrew as a monit sherut (service cab), going to Jerusalem, Haifa, and Beersheba. Israel has an integrated nationwide public transport payment system covering multiple transit options (train, bus, and light rail) run by various operators, using a single payment card: the Rav-Kav. It features flexible tariff arrangements and offers free transfers between transit methods within certain geographical zones and time periods. A public transport information office, which also issues Rav-Kav cards, is located in the arrivals hall of Terminal 3. With a few exceptions, most public transport options (other than taxis and service cabs) do not operate on the Sabbath (i.e., from early Friday evening to late Saturday evening, as well as on certain Jewish holidays). A new app-based payment system, with a different, simpler fare structure, was introduced in December 2020; apps supporting routing and payment include Cello, Moovit, Pango, and Rav-Pass.

Israel Railways operates the Ben Gurion Airport railway station, located on the lower level of Terminal 3. From this station passengers may head northwest to Tel Aviv, Haifa and other destinations in the north, or southeast to Modi'in and Jerusalem. The journey to Tel Aviv Savidor Central railway station takes about 18 minutes, and to Jerusalem's Yitzhak Navon station about 25 minutes. There is also late-night/early-morning train service to and from the airport terminating at Beersheba Center via Lod, Ashkelon, and selected destinations in between. Almost 3.3 million passengers used the railway line to and from the airport in 2009. The service does not operate on Shabbat and Jewish holidays, but on all other days it runs day and night. The line to Nahariya through Tel Aviv and Haifa formerly operated 24 hours a day on weekdays, but these services were suspended following the COVID-19 pandemic and put on hold until railway electrification works are completed in the mid-2020s, after which the line would run from Jerusalem and terminate at Karmiel instead of Nahariya (though it would continue to serve Tel Aviv and Haifa).

The airport is served by regular inter-city bus lines, limousine and private shuttle services, sherut "shared" door-to-door taxi vans and regular taxis. The Afikim bus company provides direct service to Jerusalem with line 485, running 24 hours a day, on the hour; the line departs from Terminal 3 on the 2nd floor and passes through Terminal 1. Egged bus number 5 ferries passengers between the terminals and a small bus terminal in the nearby Airport City business park, near El Al junction just outside the airport, where they can connect to regular Egged bus routes passing through the area. Passengers connecting at Airport City can pay for both rides on the same ticket, without having to pay an extra fare for bus No. 5. Other bus companies serve Terminal 3 directly, and the airport also provides a free shuttle bus between terminals. On Shabbat, when there is no train service, a shared shuttle service is available between the airport and Tel Aviv hotels.

Located on Highway 1, the Jerusalem–Tel Aviv highway, the airport has a total of approximately 20,000 parking spaces for short- and long-term parking. The spaces for long-term parking are situated several kilometres from the terminal and are reached by a free shuttle bus. Car rental at the airport is available from Avis, Budget, Eldan, Tamir Rental, Thrifty, Hertz, and Shlomo Sixt.
Service quality

In December 2006, Ben Gurion International Airport ranked first among 40 European airports, and 8th out of 77 airports worldwide, in a survey conducted by Airports Council International to determine the most customer-friendly airport. Tel Aviv placed second, behind Japan's Nagoya Airport, in the grouping of airports handling between 5 and 15 million passengers per year. The survey consisted of 34 questions; a random sample of 350 passengers at the departure gates was asked how satisfied they were with the service, infrastructure and facilities. Ben Gurion received a rating of 3.94 out of 5, followed by Vienna, Munich, Amsterdam, Brussels, Zürich, Copenhagen, and Helsinki. The airport retained its title as the best Middle Eastern airport in the 2007, 2008, and 2009 surveys.
========================================