[SOURCE: https://github.com/security/advanced-security/secret-protection] | [TOKENS: 619] |
Keep your secrets secret GitHub Secret Protection continuously monitors your GitHub perimeter, helping prevent exposures, protect credentials, and ship securely. 4.4M Secrets prevented from leaking on GitHub in 2024 150+ Industry partners, working together to mitigate risk for the developer community 39M Secret leaks detected with Secret Protection in 2024 Prevent accidental secret exposure across your repositories Push protection automatically blocks secrets before they reach your repository, keeping code clean without disrupting workflows. Detect secrets in issues, discussions, and more with secret scanning. Metadata like validity checks and public leaks help prioritize active threats. GitHub Copilot finds elusive secrets like passwords without the false positives. It detects secrets that traditional secret detectors can't catch, providing an additional layer of security. Manage policies like delegated bypass for push protection, alert dismissal restrictions, and built-in enablement configurations, simplifying security enforcement at scale. Powered by a global security partnership Whether you're securing an open source project or strengthening your enterprise codebase, Secret Protection helps you keep secrets out of your code. Resources to get started Take an in-depth look at the current state of application security. Learn how to build security into your code from day one with DevSecOps. Explore common application security pitfalls and how to avoid them. GitHub Secret Protection detects and prevents secret leaks continuously in real-time, proactively blocking sensitive credentials from being pushed to a repository with push protection. With a remarkably low false positive rate and approximately 150 service provider integrations, it enables rapid credential revocation and rotation, enhancing developer productivity. The secret risk assessment provides a free, comprehensive overview of an organization’s secret leak footprint across its GitHub repositories. By analyzing repositories for exposed secrets, it helps admins and developers understand their exposure to potential security risks and offers actionable insights for remediation. Push protection is designed to prevent sensitive information, such as secrets or tokens, from being pushed to your repository in the first place. It proactively scans your code for secrets during the push process and blocks the push if any are detected. Delegated bypass introduces an approval process for developers to bypass push protection. Anyone opting to bypass a push protection block will need to submit a request to a designated group of reviewers, ensuring any risky secrets are not accidentally leaked. Validity checks help you determine whether detected secrets are still active, enabling developers and security teams to prioritize their response effectively. When a secret is flagged, the system verifies its validity to confirm whether the secret is active or inactive. The secret scanning partnership program allows service providers to secure their token formats by enabling GitHub to scan public repositories and npm packages for exposed secrets. 
When a secret is found in a public repo, GitHub sends an alert directly to the service provider, who can then validate and take appropriate action. |
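As a rough, hypothetical illustration of the push-protection idea described above (not GitHub's actual detection logic), a client-side pre-push check might scan files for token-like patterns and abort the push on a match. The patterns and file handling below are invented for the example.

```python
# Hypothetical client-side secret check in the spirit of push protection.
# The patterns are illustrative examples only, not GitHub's detection rules.
import re
import sys

SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for anything that looks like a secret."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings


def main(paths: list[str]) -> int:
    blocked = False
    for path in paths:
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                findings = scan_text(handle.read())
        except OSError:
            continue  # unreadable file; skip it rather than fail the whole check
        for name, matched in findings:
            print(f"BLOCKED {path}: possible {name} ({matched[:8]}...)")
            blocked = True
    # A non-zero exit status would abort the push when wired into a pre-push hook.
    return 1 if blocked else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

A real service would pair this kind of blocking with the validity checks and delegated-bypass review flow described above; the sketch only shows the blocking step.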
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/French_Academy_of_Sciences] | [TOKENS: 2824] |
French Academy of Sciences The French Academy of Sciences (French: Académie des sciences, [akademi de sjɑ̃s]) is a learned society, founded in 1666 by Louis XIV at the suggestion of Jean-Baptiste Colbert, to encourage and protect the spirit of French scientific research. It was at the forefront of scientific developments in Europe in the 17th and 18th centuries, and is one of the earliest Academies of Sciences. Currently headed by Patrick Flandrin (President of the academy), it is one of the five Academies of the Institut de France. History The Academy of Sciences traces its origin to Colbert's plan to create a general academy. He chose a small group of scholars who met on 22 December 1666 in the King's library, near the present-day Bibliothèque Nationale, and thereafter held twice-weekly working meetings there in the two rooms assigned to the group. The first 30 years of the academy's existence were relatively informal, since no statutes had as yet been laid down for the institution. In contrast to its British counterpart, the academy was founded as an organ of government. In Paris, there were not many membership openings, so elections to fill positions were contentious. The election process was at least a six-stage process with rules and regulations that allowed for chosen candidates to canvass other members and for current members to consider postponing certain stages of the process if the need arose. Elections in the early days of the academy were important activities, and as such made up a large part of the proceedings at the academy, with many meetings being held regarding the election to fill a single vacancy within the academy. That is not to say that discussion of candidates and the election process as a whole was confined to the meetings. Members that belonged to the vacancy's respective field would continue discussion of potential candidates for the vacancy in private. Being elected into the academy did not necessarily guarantee full membership; in some cases, one would enter the academy as an associate or correspondent before being appointed as a full member of the academy. The election process was originally only to replace members from a specific section. For example, if a member whose field was mathematics was removed or resigned from his position, the following election process nominated only those whose focus was also mathematics in order to fill that discipline's vacancy. That led to periods in which no specialists in a given field could be found, leaving positions in those fields vacant since they could not be filled with people from other disciplines. The needed reform came late in the 20th century, in 1987, when the academy decided to abandon the practice and begin filling vacancies with people from new disciplines. This reform was not only aimed at further diversifying the disciplines under the academy, but also at helping to combat the internal aging of the academy itself. The academy was expected to remain apolitical, and to avoid discussion of religious and social issues. On 20 January 1699, Louis XIV gave the Company its first rules. The academy received the name of Royal Academy of Sciences and was installed in the Louvre in Paris. Following this reform, the academy began publishing a volume each year with information on all the work done by its members and obituaries for members who had died. This reform also codified the method by which members of the academy could receive pensions for their work. 
The royal reform originally organized the academy hierarchically into the following groups: Pensionaires, Pupils, Honoraires, and Associés. The reform also added new groups not previously recognized, such as Vétéran. Some of these roles' member limits were expanded, and some roles were even removed or combined over the course of the academy's history. The Honoraires group established by this reform in 1699, whose members were directly appointed by the King, was recognized until its abolition in 1793. Membership in the academy exceeded 100 officially-recognised full members only in 1976, 310 years after the academy's inception in 1666. The membership increase came with a large-scale reorganization in 1976. Under this reorganization, 130 resident members, 160 correspondents, and 80 foreign associates could be elected. A vacancy opens only upon the death of members, as they serve for life. During elections, half of the vacancies are reserved for people less than 55 years old. This was created as an attempt to encourage younger members to join the academy. The reorganization also divided the academy into two divisions. On 8 August 1793, the National Convention abolished all the academies. On 22 August 1795, a National Institute of Sciences and Arts was put in place, bringing together the old academies of the sciences, literature and arts, among them the Académie française and the Académie des sciences. Also in 1795, the academy determined ten titles (the first four in Division 1 and the others in Division 2) to be its newly accepted branches of scientific study. The last two sections were bundled since there were many good candidates fit to be elected for those practices, and the competition was stiff. Some individuals, like François Magendie, had made stellar advancements in their selected fields of study that warranted a possible addition of new fields. However, even someone like Magendie, who had made breakthroughs in physiology and impressed the academy with his hands-on vivisection experiments, could not get his study into its own category. Despite Magendie being one of the leading innovators of his time, it was still a battle for him to become an official member of the academy, a feat he would later accomplish in 1821. He further enhanced the academy's prestige when he and anatomist Charles Bell produced the widely known Bell–Magendie law. From 1795 until 1914, the start of the First World War, the French Academy of Sciences was the preeminent organization in French science. Almost all the old members of the previously abolished Académie were formally re-elected and retook their ancient seats. Among the exceptions was Dominique, comte de Cassini, who refused to take his seat. Membership in the academy was not restricted to scientists: in 1798 Napoleon Bonaparte was elected a member of the academy and three years later a president in connection with his Egyptian expedition, which had a scientific component. In 1816, the again renamed "Royal Academy of Sciences" became autonomous, while forming part of the Institute of France; the head of State became its patron. In the Second Republic, the name returned to Académie des sciences. During this period, the academy was funded by and accountable to the Ministry of Public Instruction. The academy came to control French patent laws in the course of the eighteenth century, acting as the liaison of artisans' knowledge to the public domain. As a result, academicians dominated technological activities in France. 
The academy proceedings were published under the name Comptes rendus de l'Académie des Sciences (1835–1965). The Comptes rendus is now a journal series with seven titles. The publications can be found on the site of the French National Library. In 1818 the French Academy of Sciences launched a competition to explain the properties of light. The civil engineer Augustin-Jean Fresnel entered the competition by submitting a new wave theory of light. Siméon Denis Poisson, one of the members of the judging committee, studied Fresnel's theory in detail. Being a supporter of the particle-theory of light, he looked for a way to disprove it. Poisson thought that he had found a flaw when he demonstrated that Fresnel's theory predicts that an on-axis bright spot would exist in the shadow of a circular obstacle, where there should be complete darkness according to the particle-theory of light. The Poisson spot is not easily observed in everyday situations, so it was only natural for Poisson to interpret it as an absurd result that should disprove Fresnel's theory. However, the head of the committee, Dominique-François-Jean Arago, who incidentally later became Prime Minister of France, decided to perform the experiment in more detail. He affixed a 2-mm metallic disk to a glass plate with wax. To everyone's surprise he succeeded in observing the predicted spot, which convinced most scientists of the wave-nature of light. For three centuries women were not allowed as members of the academy. This meant that many women scientists were excluded, including two-time Nobel Prize winner Marie Curie, Nobel winner Irène Joliot-Curie, mathematician Sophie Germain, and many other deserving women scientists. The first woman admitted as a correspondent member was a student of Curie's, Marguerite Perey, in 1962. The first female full member was Yvonne Choquet-Bruhat in 1979. Membership in the academy is geared towards representing the demographics of the French population. French population increases and changes in the early 21st century led the academy to expand its reference population sizes through a reform in 2002. The overwhelming majority of members remain in the academy until death, with a few exceptions of removals, transfers, and resignations. The last removal of a member from the academy was in 1944. Removal from the academy was often for not performing to standards, not performing at all, leaving the country, or political reasons. On some rare occasions, a member has been elected twice and subsequently removed twice. This is the case for Marie-Adolphe Carnot. Government interference The most direct involvement of the government in the affairs of the institute came in the initial nomination of members in 1795, but as the nominated members constituted only one third of the membership, and most of these had previously been elected as members of the respective academies under the old regime, few objections were raised. Moreover, these nominated members were then completely free to nominate the remaining members of the institute. Members expected to remain such for life, but interference occurred in a few cases where the government suddenly terminated membership for political reasons. The other main interference came when the government refused to accept the result of academy elections. The government's control of the academies was apparent in 1803, when Bonaparte decided on a general reorganization. 
His principal concern was not the First class but the Second, which included political scientists who were potential critics of his government. Bonaparte abolished the second class completely and, after a few expulsions, redistributed its remaining members, together with those of the Third class, into a new Second class concerned with literature and a new Third class devoted to the fine arts. Still, this relationship between the academy and the government was not a one-way affair, as members expected to receive their payment of an honorarium. Decline Although the academy still exists today, after World War I its reputation and status were increasingly questioned. One factor behind its decline was its development from a meritocracy into a gerontocracy: a shift from leadership by those with demonstrated scientific ability to favoring those with seniority. It became known as a sort of "hall of fame" that lost control, real and symbolic, of the professional scientific diversity in France at the time. Another factor was that in the span of five years, 1909 to 1914, funding for science faculties dropped considerably, eventually leading to a financial crisis in France. Present use Today the academy is one of five academies comprising the Institut de France. Its members are elected for life. Currently, there are 150 full members, 300 corresponding members, and 120 foreign associates. They are divided into two scientific groups: the Mathematical and Physical sciences and their applications and the Chemical, Biological, Geological and Medical sciences and their applications. The academy currently pursues five missions: encouraging scientific life, promoting the teaching of science, transmitting knowledge between scientific communities, fostering international collaborations, and ensuring a dual role of expertise and advice. The French Academy of Sciences originally focused its development efforts on creating a true co-development Euro-African program beginning in 1997. Since then it has broadened its scope of action to other regions of the world. The standing committee COPED is in charge of the international development projects undertaken by the French Academy of Sciences and its associates. The current president of COPED is Pierre Auger, the vice president is Michel Delseny, and the honorary president is Francois Gros. All are current members of the French Academy of Sciences. COPED has hosted several workshops or colloquia in Paris, involving representatives from African academies, universities or research centers, addressing a variety of themes and challenges dealing with African development and covering a large spectrum of fields, specifically higher education in the sciences and research practices in basic and applied sciences that deal with various aspects relevant to development (renewable energy, infectious diseases, animal pathologies, food resources, access to safe water, agriculture, urban health, etc.). Current committees and working parties The Academic Standing Committees and Working Parties prepare the advice notes, policy statements and the Academic Reports. Some have a statutory remit, such as the Select Committee, the Committee for International Affairs and the Committee for Scientists' Rights; others are created ad hoc by the academy and approved formally by vote in a members-only session. 
Today the academy maintains a number of standing committees and working parties. Medals, awards and prizes Each year, the Academy of Sciences distributes about 80 prizes. People For the officers of the academy, see Category:Officers of the French Academy of Sciences; for a list of the academy's members past and present, see Category:Members of the French Academy of Sciences. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Codelobster] | [TOKENS: 263] |
Codelobster Codelobster is a portable integrated development environment (IDE) primarily for PHP, which also supports HTML, CSS, and JavaScript development. Plug-ins are available for Drupal, WordPress, Smarty, Joomla, JQuery, Facebook, Codeigniter, Yii, and CakePHP. Free registration by email is required after 30 days of use of the program, and there are also paid versions ("Lite" and "Professional") with additional features. The program lacks a help system as of its latest version. There is also a special "PHP edition" distribution, available only for Windows, which has not been updated since 2019. The program features syntax highlighting and auto-completion for SQL, PHP, HTML, CSS, JavaScript, and XML, as well as automatic syntax checking. There is an HTML and CSS inspector similar to Firebug. It also includes Drupal support. All plugins are paid, but they offer trial periods of varying length. Since the activation servers have been shut down, it is no longer possible to activate the program to use the Pro features. The developers did not provide a pre-activated version before discontinuing the product, and no official activation patch is available. |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/19/an-ai-data-center-boom-is-fueling-redwoods-energy-storage-business/] | [TOKENS: 992] |
An AI data center boom is fueling Redwood’s energy storage business A year ago, Redwood Materials didn’t have an energy storage business. Now, it is the fastest-growing unit within the battery recycling and materials startup — a reflection of an AI data center building boom. The evidence of that growth, the company says, can be found at its R&D lab in San Francisco, which has expanded four-fold into a 55,000-square-foot facility and now employs nearly 100 people. Those are small figures compared to Redwood’s total workforce of 1,200 people and its sprawling campus at its Carson City, Nevada headquarters and another facility near Reno. But its value and recent expansion are tied directly to its burgeoning energy storage business, which launched in June 2025. The San Francisco facility, which opened in April 2025, is where engineers integrate the hardware, software, and power electronics for energy storage systems that power data centers, AI computing, and other large-scale industrial applications. The company said in a blog post Thursday the expansion will support a wave of energy storage deployments related to data centers. The company’s recent $425 million Series E raise will provide the capital needed to scale the business. Google, a new investor, as well as existing backer Nvidia, joined the round to support Redwood’s energy storage business venture. “AI data centers have definitely been a pressing area of focus,” Claire McConnell, vice president of business development, told TechCrunch in a recent interview, adding that there are other use cases for its systems, including supporting renewable projects like solar and wind. Data centers have been around for decades, but advancements in AI have spurred a building spree and a need for reliable electricity. “What data center developers are seeing is something that they hadn’t experienced before,” McConnell said. “When they’re trying to connect to the grid, they are being told it is going to take five-plus years to get that and at the same time, you’re seeing this massive demand to build more data centers and compete in the AI race.” Redwood Materials was founded in 2017 by former Tesla CTO JB Straubel to create a circular supply chain for batteries. It initially focused on recycling scrap from battery production and consumer electronics, which was processed and then sold to customers such as Panasonic. The company also expanded into the battery materials business and today produces cathodes for battery cells. The company opened Redwood Energy last summer to leverage the thousands of EV batteries it has collected as part of its battery-recycling business to provide power to companies. Redwood Energy’s first customer is Crusoe, a startup that Straubel invested in back in 2021. Redwood set up an energy storage system that uses old EV batteries that are not yet ready for recycling. 
The system, which generates 12 MW of power and has 63 MWh of capacity, sends power to a modular data center built by Crusoe, a company best known for its large-scale data center campus in Abilene, Texas — the initial site of the Stargate project. McConnell said customers that are in the pipeline include hyperscalers — companies that operate massive cloud computing data centers and consume hundreds of megawatts of power — that would far exceed the capacity of its project with Crusoe. “We’re working on ones in the hundreds of megawatt hours, and we have ones in the pipeline that are multiple gigawatt hours,” she said. |
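As a quick sanity check on the figures quoted above (purely back-of-the-envelope arithmetic, not from the article), 63 MWh of capacity discharged at the full 12 MW rating lasts a little over five hours:

```python
# Back-of-the-envelope arithmetic for the quoted Crusoe system figures.
power_mw = 12.0      # maximum discharge power, in megawatts
capacity_mwh = 63.0  # stored energy, in megawatt-hours

duration_hours = capacity_mwh / power_mw
print(f"Full-power discharge duration: {duration_hours:.2f} hours")  # -> 5.25 hours
```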
======================================== |
[SOURCE: https://techcrunch.com/2026/02/19/new-york-hits-the-brakes-on-robotaxi-expansion-plan/] | [TOKENS: 991] |
New York hits the brakes on robotaxi expansion plan Waymo’s big chance to bring its robotaxis to the state of New York has been thwarted — for now. New York Governor Kathy Hochul withdrew a proposal that would have amended vehicle and traffic laws to effectively legalize robotaxis in the state outside of New York City. Hochul spokesperson Sean Butler confirmed to TechCrunch that the proposal has been pulled. “Based on conversations with stakeholders, including in the legislature, it was clear that the support was not there to advance this proposal,” Butler said in an emailed statement. Bloomberg was the first to report the proposal had been removed. The withdrawal is a setback for Waymo, which has tried for years — along with other autonomous vehicle (AV) companies — to test and eventually deploy robotaxis in New York. “We hear from thousands of New Yorkers who have experienced Waymo in other cities and want access to it at home,” Waymo said in a statement emailed to TechCrunch. “They want the safety, privacy, and comfort that riders in other major cities already enjoy. While we are disappointed by the Governor’s decision, we’re committed to bringing our service to New York and will work with the State Legislature to advance this issue.” “The path forward requires a collaborative approach that prioritizes transparency and public safety. We will continue to engage constructively with the Governor, the Legislature, and officials around the state to deliver this proven mobility option that New Yorkers are waiting for,” added Waymo’s statement. Hochul had introduced, as part of her broader budget proposal, a plan to change a state law that mandates drivers keep one hand on the wheel at all times. That law essentially prevents robotaxi companies like Waymo from operating in the state since no human is behind the wheel — if there is a steering wheel at all. Even if Hochul’s proposal had survived, it would not have opened the floodgates to AV companies. The proposal contained a number of limitations, including that AV companies could not deploy for-hire robotaxi services in any city with more than a million people. AV companies would also need approval from the state’s transportation commissioner, pay a $1 million fee, and show proof of financial security of at least $5 million. The state would have only backed robotaxi pilots in cities or townships where there was a clear demonstration of local support, Butler said. With that proposal dead, the state’s existing AV pilot program, which is far more restrictive, is expected to remain. Under the pilot program, companies can seek an exemption to the one-hand-on-the-wheel rule, allowing them to develop and test autonomous vehicles in the state, but not launch commercial robotaxi services. Waymo is currently testing in New York City, and is allowed to do so through March 31. New York City regulators granted a permit last August to Waymo to test its robotaxis in the city. 
Under that permit, Waymo is allowed to test up to eight of its Jaguar I-Pace vehicles in Manhattan and downtown Brooklyn, as long as there is a human safety operator behind the wheel. Waymo is active in numerous other states and operates commercial robotaxi services in Atlanta, Austin, Miami, Phoenix, Los Angeles, and the San Francisco Bay Area. The company says it provides more than 400,000 paid rides every week and is targeting 1 million weekly rides by the end of the year. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-Gomez-JaureguiGutierrez-GarciaGonzález-RedondoIglesiasManchadoOtero2022-21] | [TOKENS: 10628] |
Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
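As a brief aside (a standard identity, not part of the article itself): the slide rule can do multiplication and division with nothing but sliding scales because logarithms turn products and quotients into sums and differences of lengths,

```latex
\log(ab) = \log a + \log b, \qquad \log\!\left(\frac{a}{b}\right) = \log a - \log b.
```

Sliding one logarithmic scale along another adds or subtracts those lengths mechanically, and the result is read off where the scales line up.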
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, the designs of which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
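To make Torres Quevedo's example concrete, the snippet below (an illustration only, not a reconstruction of his machine) evaluates the formula $a^{x}(y-z)^{2}$ for a sequence of sets of values, the kind of repetitive tabulation his proposed machine was meant to automate; the sample values are arbitrary.

```python
# Illustrative only: tabulate Torres Quevedo's example formula a^x * (y - z)^2
# over a sequence of value sets, the sort of repetitive evaluation his design targeted.
def torres_formula(a: float, x: float, y: float, z: float) -> float:
    return (a ** x) * (y - z) ** 2

# Arbitrary sample value sets (a, x, y, z).
value_sets = [
    (2.0, 3.0, 7.0, 4.0),
    (1.5, 2.0, 10.0, 6.5),
    (3.0, 0.5, 5.0, 1.0),
]

for a, x, y, z in value_sets:
    print(f"a={a}, x={x}, y={y}, z={z} -> {torres_formula(a, x, y, z):.4f}")
```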
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC was developed and constructed from 1943 until its full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256), either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
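The picture sketched above of memory as numbered cells that hold both instructions and data can be made concrete with a short Python sketch. It is a deliberately simplified illustration, not a model of any real CPU: the opcodes (0 = halt, 1 = load a value into a cell, 2 = add one cell into another) and the cell layout are invented for this example.

    # Memory is one flat list of numbered cells; the first cells hold the program,
    # the last cells hold the data it works on.
    memory = [
        1, 10, 0,      # cell 0: put the value 0 into cell 10
        1, 11, 5,      # cell 3: put the value 5 into cell 11
        2, 10, 11,     # cell 6: cell 10 = cell 10 + cell 11
        0,             # cell 9: halt
        0, 0,          # cells 10-11: data
    ]

    pc = 0                               # program counter: address of the next instruction
    while memory[pc] != 0:               # opcode 0 means halt
        opcode = memory[pc]
        if opcode == 1:                  # load immediate: addr, value
            addr, value = memory[pc + 1], memory[pc + 2]
            memory[addr] = value
        elif opcode == 2:                # add: addr_a, addr_b (result stored in addr_a)
            a, b = memory[pc + 1], memory[pc + 2]
            memory[a] = memory[a] + memory[b]
        pc += 3                          # advance to the next instruction

    print(memory[10])                    # prints 5: program and data share one memory

A jump instruction, as described earlier, would amount to assigning a new value to pc instead of simply advancing it.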
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. 
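The time-sharing scheme described above can be illustrated with a deliberately simplified Python sketch. Here two "programs" are written as generators and a scheduler gives each one a short slice in turn; this only models the rapid switching, since a real operating system uses hardware interrupts rather than cooperative yields, and the program names and limits are made up for the example.

    def count_program(name, limit):
        total = 0
        for i in range(1, limit + 1):
            total += i
            yield f"{name}: running, total so far {total}"   # give up the CPU at the end of a slice

    def scheduler(programs):
        while programs:
            program = programs.pop(0)          # take the next program in the queue
            try:
                print(next(program))           # run it for one time slice
                programs.append(program)       # put it back at the end of the queue
            except StopIteration:
                pass                           # program finished; drop it from the queue

    scheduler([count_program("A", 3), count_program("B", 2)])

The output interleaves lines from A and B even though only one of them is ever executing at any instant, which is exactly the appearance of simultaneity that time-sharing provides.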
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
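As a concrete illustration, the lines below sketch such a summation loop in MIPS assembly language. This is a sketch written for illustration rather than a definitive listing: it assumes a SPIM/MARS-style assembler (which accepts the li and ble pseudo-instructions), and the choice of registers is arbitrary.

            .text
    main:   li   $t0, 0            # running total starts at 0
            li   $t1, 1            # counter starts at 1
    loop:   add  $t0, $t0, $t1     # total = total + counter
            addi $t1, $t1, 1       # counter = counter + 1
            ble  $t1, 1000, loop   # repeat while counter <= 1,000
                                   # $t0 now holds 500500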
Once told to run a program like the MIPS assembly sketch above, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data (a minimal training-loop sketch is given at the end of this section). The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
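The sketch referred to above: a toy Python training loop that adjusts a single parameter from data, illustrating the idea that machine learning models learn from examples rather than from explicitly programmed rules. The data points and learning rate are made up for the example.

    # Fit a single parameter w so that the prediction w * x approximates y,
    # by repeatedly nudging w in the direction that reduces the squared error.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # made-up (x, y) pairs, roughly y = 2x

    w = 0.0                        # the model's single trainable parameter
    learning_rate = 0.05
    for step in range(200):        # "training": repeatedly adjust w based on the data
        for x, y in data:
            error = w * x - y
            w -= learning_rate * error * x        # gradient of the squared error with respect to w

    print(round(w, 2))             # ends up close to 2.0, learned from the data alone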
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PyCharm] | [TOKENS: 517] |
Contents PyCharm PyCharm is an integrated development environment (IDE) used for programming in Python. It provides code analysis, a graphical debugger, an integrated unit tester, integration with version control systems, and supports web development with Django. PyCharm is developed by the Czech company JetBrains and built on their IntelliJ platform. PyCharm is cross-platform, working on Microsoft Windows, macOS, and Linux. Portions of PyCharm's source code are released under the Apache License and available on GitHub, and a subscription is available to gain access to proprietary features. Features History PyCharm entered the market of Python-focused IDEs to compete with PyDev (for Eclipse) and the more broadly focused Komodo IDE by ActiveState.[citation needed] The beta version of the product was released in July 2010, with version 1.0 arriving three months later. Version 2.0 was released on December 13, 2011, version 3.0 was released on September 24, 2013, and version 4.0 was released on November 19, 2014. PyCharm became open source on October 22, 2013. The open source variant is released under the name Community Edition while the commercial variant, Professional Edition, contains closed-source modules. As of December 2022, JetBrains has discontinued PyCharm Edu and IntelliJ IDEA Edu. The educational functionality is now bundled with the Community and Professional editions of IntelliJ IDEA and PyCharm. Users are encouraged to install the Community or Professional editions and enable educational features through the IDE settings. In April 2025, PyCharm Professional Edition and PyCharm Community Edition were merged into a "unified product", now simply called PyCharm. The new version of PyCharm can be used free of charge, with a licensing fee available to gain access to features previously exclusive to the Professional Edition. Licensing Portions of PyCharm's source code are distributed under the Apache 2 license. The source code is available on GitHub. A Pro subscription can be purchased to gain access to additional features, primarily geared towards a faster workflow and machine learning tools; however, the core IDE can be used free of charge. Limitations PyCharm does not currently include a built-in GUI builder. While there is no native GUI builder provided within PyCharm, by using PySide6/PyQt6 (the Python bindings to Qt 6) one can gain access to the Qt Widget Designer graphical UI builder, as sketched below. See also References External links |
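The sketch below shows one common way a form drawn in Qt Widget Designer can be loaded from Python with PySide6, the kind of workflow that compensates for the missing built-in GUI builder. The file name main_window.ui is a placeholder for a form saved from the Designer; this is a minimal sketch, not PyCharm functionality.

    import sys
    from PySide6.QtCore import QFile, QIODevice
    from PySide6.QtUiTools import QUiLoader
    from PySide6.QtWidgets import QApplication

    app = QApplication(sys.argv)
    ui_file = QFile("main_window.ui")          # placeholder name for a form saved from Qt Widget Designer
    ui_file.open(QIODevice.ReadOnly)
    window = QUiLoader().load(ui_file)         # build the widget tree described by the .ui file
    ui_file.close()
    window.show()
    sys.exit(app.exec())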
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Eric_Python_IDE] | [TOKENS: 666] |
Contents eric (software) eric is a free integrated development environment (IDE) used for computer programming. Since it is a full featured IDE, it provides by default all necessary tools needed for the writing of code and for the professional management of a software project. eric is written in the programming language Python and its primary use is for developing software written in Python. It is usable for development of any combination of Python 3 or Python 2, Qt 5 or Qt 4 and PyQt 5 or PyQt 4 projects, on Linux, macOS and Microsoft Windows platforms. Characteristics eric is written in Python and uses the PyQt Python bindings for the Qt GUI toolkit. By design, eric acts as a front end for several programs, for example the QScintilla editor widget. The key features of eric 6 are: Prior to the release of eric version 5.5.0, eric version 4 and eric version 5 coexisted and were maintained simultaneously, while eric 4 was the variant for writing software in Python version 2 and eric version 5 was the variant for writing software in Python version 3. With the release of eric version 5.5.0 both variants had been merged into one, so that all versions as of eric version 5.5.0 support writing software in Python 2 as well as in Python 3, making the separate development lanes of eric version 4 and 5 obsolete. Those two separate development lanes are no longer maintained, and the last versions prior to merging them both to 5.5.0 were versions 4.5.25 and 5.4.7. Releases Until 2016, eric used a software versioning scheme with a three-sequence identifier, e.g. 5.0.1. The first sequence represents the major version number which is increased when there are significant jumps in functionality, the second sequence represents the minor number, which is incremented when only some features or significant fixes have been added, and the third sequence is the revision number, which is incremented when minor bugs are fixed or minor features have been added. From late 2016, the version numbers show the year and month of release, e.g. 16.11 for November 2016. eric follows the development philosophy of Release early, release often, following loosely a time-based release schedule. Currently a revision version is released around the first weekend of every month, a minor version is released annually, in most cases approximately between December and February. The following table shows the version history of eric, starting from version 4.0.0. Only major (e.g. 6.0.0) and minor (e.g. 6.1.0) releases are listed; revision releases (e.g. 6.0.1) are omitted. Name Several allusions are made to the British comedy group Monty Python, which the Python programming language is named after. Eric alludes to Eric Idle, a member of the group, as does IDLE, the standard python IDE shipped with most distributions.[failed verification] Criticism The Eric Python IDE does not feature an integrated toolchain for now. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/NetBeans] | [TOKENS: 1364] |
Contents NetBeans NetBeans is an integrated development environment (IDE) for Java. NetBeans allows applications to be developed from a set of modular software components called modules. NetBeans runs on Windows, macOS, Linux and Solaris. In addition to Java development, it has extensions for other languages like PHP, C, C++, HTML5, and JavaScript. Applications based on NetBeans, including the NetBeans IDE, can be extended by third-party developers. History NetBeans began in 1996 as Xelfi (word play on Delphi), a Java IDE student project under the guidance of the Faculty of Engineering and Technology at Charles University in Prague. In 1997, Roman Staněk formed a company around the project and produced commercial versions of the NetBeans IDE until it was bought by Sun Microsystems in 1999. Sun open-sourced the NetBeans IDE in June of the following year. Since then, the NetBeans community has continued to grow. In 2010, Sun (and thus NetBeans) was acquired by Oracle Corporation. Under Oracle, NetBeans had to find some synergy with JDeveloper, a freeware IDE that had historically been a product of the company; by 2012, both IDEs had been rebuilt around a shared codebase, the NetBeans Platform. In September 2016, Oracle submitted a proposal to donate the NetBeans project to The Apache Software Foundation, stating that it was "opening up the NetBeans governance model to give NetBeans constituents a greater voice in the project's direction and future success through the upcoming release of Java 9 and NetBeans 9 and beyond". The move was endorsed by Java creator James Gosling. The project entered the Apache Incubator in October 2016 and graduated as an Apache Software Foundation top-level project in 2019. The first version available as an Apache top-level project was Apache NetBeans 11.3. NetBeans IDE NetBeans IDE is an open-source integrated development environment. NetBeans IDE supports development of all Java application types (Java SE (including JavaFX), Java ME, web, EJB and mobile applications) out of the box. Among other features are an Ant-based project system, Maven support, refactorings, and version control (supporting CVS, Subversion, Git, Mercurial and Clearcase). All the functions of the IDE are provided by modules. Each module provides a well-defined function, such as support for the Java language, editing, or support for the CVS and SVN versioning systems. NetBeans contains all the modules needed for Java development in a single download, allowing the user to start working immediately. Modules also allow NetBeans to be extended. New features, such as support for other programming languages, can be added by installing additional modules. For instance, Sun Studio, Sun Java Studio Enterprise, and Sun Java Studio Creator from Sun Microsystems are all based on the NetBeans IDE. NetBeans IDE is licensed under the Apache License 2.0. Previously, from July 2006 through 2007, it was licensed under Sun's Common Development and Distribution License (CDDL), a license based on the Mozilla Public License (MPL). In October 2007, Sun announced that NetBeans would henceforth be offered under a dual license of the CDDL and the GPL version 2 licenses, with the GPL linking exception for GNU Classpath. Oracle has donated the NetBeans Platform and IDE to the Apache Foundation, where it underwent incubation and graduated as a top-level project in April 2019. 
Other products In an October 2016 interview with Gabriela Motroc, Oracle Vice President Bill Pataky stated that Oracle has a number of products that depend on NetBeans. Integrated modules These modules are part of the NetBeans IDE: The NetBeans Profiler is a tool for monitoring Java applications: it helps developers find memory leaks and optimize speed. Formerly downloaded separately, it has been integrated into the core IDE since version 6.0. The Profiler is based on a Sun Laboratories research project that was named JFluid. That research uncovered specific techniques that can be used to lower the overhead of profiling a Java application. One of those techniques is dynamic bytecode instrumentation, which is particularly useful for profiling large Java applications. Using dynamic bytecode instrumentation and additional algorithms, the NetBeans Profiler is able to obtain runtime information on applications that are too large or complex for other profilers. NetBeans also supports Profiling Points that let developers profile precise points of execution and measure execution time. Formerly known as project Matisse, the GUI design tool enables developers to prototype and design Swing GUIs by dragging and positioning GUI components. The GUI builder has built-in support for JSR 295 (Beans Binding technology), but the support for JSR 296 (Swing Application Framework) was removed in 7.1. The NetBeans JavaScript editor provides extended support for JavaScript, Ajax, and CSS. JavaScript editor features comprise syntax highlighting, refactoring, code completion for native objects and functions, generation of JavaScript class skeletons, generation of Ajax callbacks from a template, and automatic browser compatibility checks. CSS editor features comprise code completion for style names, quick navigation through the navigator panel, displaying the CSS rule declaration in a List View and file structure in a Tree View, sorting the outline view by name, type or declaration order (List & Tree), creating rule declarations (Tree only), and refactoring a part of a rule name (Tree only). NetBeans 7.4 and later use the Nashorn JavaScript engine developed by Oracle. NetBeans IDE download bundles Users can choose to download NetBeans IDE bundles tailored to specific development needs. Users can also download and install all other features at a later date directly through the NetBeans IDE. The NetBeans IDE Bundle for Web & Java EE provides complete tools for all the latest Java EE 6 standards, including the new Java EE 6 Web Profile, Enterprise Java Beans (EJBs), servlets, Java Persistence API, web services, and annotations. NetBeans also supports the JSF 2.0 (Facelets), JavaServer Pages (JSP), Hibernate, Spring, and Struts frameworks, and the Java EE 5 and J2EE 1.4 platforms. It includes GlassFish and Apache Tomcat. Some of its features with Java EE include: NetBeans has supported PHP since version 5.6. The bundle for PHP includes: Oracle also releases a version of NetBeans that includes all of the features of the above bundles. This bundle includes: Official Ruby support was removed with the release of 7.0. Localization NetBeans IDE is translated into the following languages: Community translations of the IDE are also available in the following languages: See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wing_IDE] | [TOKENS: 959] |
Contents Wing IDE The Wing Python IDE is a family of integrated development environments (IDEs) from Wingware created specifically for the Python programming language, with support for editing, testing, debugging, inspecting/browsing, and error-checking Python code. There are three versions of the IDE, each one focused on different types of users: Wing Pro provides AI-assisted development, local and remote debugging, editing (with multiple key bindings, auto-completion, auto-editing, and multi-selection), source browser and code navigation, code refactoring, import management, error checking, auto-reformatting, unit testing with code coverage, version control, project management, Python environment and package management, single and multi-file search, fine-grained customization, support for Docker and LXC containers, assistance for working with third-party frameworks and tools (such as Django, Flask, Matplotlib, Pandas, Blender, Maya, Unreal Engine, PyQt, wxPython, and others) through Python scripting, and comprehensive documentation. Wing Personal and Wing 101 omit many of these features. All three versions of Wing support installation on Windows, Mac OS X, and Intel and ARM Linux. Free licenses for Wing Pro are available for educational users and unpaid open-source software developers.[citation needed] AI-assisted development The AI assistant, available in Wing Pro only, can be used to write new code, refactor or redesign existing code, and inspect and understand code. Using the assistant, users may: Debugger The debugger can be used to locate and fix bugs, as well as to write new code interactively in the live runtime state for which the code is being designed. The level of the debugging support depends on the version used, with each tier of service giving the user more features with which they can debug.[citation needed] Wing 101 supports: Wing Personal adds: Wing Pro adds: Code intelligence The code intelligence features speed up editing, facilitate navigation through code, and inspect code for errors. These features rely both on static analysis of the Python code found in the project and on the Python Path, and on runtime analysis of code whenever the debugger is active or the code is active in the integrated Python Shell. The features available to the user depend on the version being used.[citation needed] Wing 101 provides: Wing Personal adds: Wing Pro adds: Project management Wing's project manager allows developers to set up, manage, and share development configurations. It supports creating projects for existing or new source directories, with optional code retrieval from version control repositories. The IDE facilitates easy creation and configuration of Python environments using virtualenv, pip, uv, Poetry, pipenv, or conda, either locally, on a remote host, or with containers managed by Docker or LXC/LXD. Version control Wing Pro integrates with various version control systems, including Git, Mercurial, Perforce, Subversion, and CVS. It offers features such as status checking, committing, logging, blame/praise/annotate, reverting, resolving, and repository push/pull operations. A difference and merge tool is also available for comparing files or directories and reviewing uncommitted changes. Package management Wing Pro includes an integrated package management tool that simplifies inspecting, adding, removing, and upgrading Python packages in the development environment. It supports pip, uv, Poetry, pipenv, and conda environments. 
Unit testing Wing Pro supports unit testing by allowing running and debugging of unit tests written for the unittest, pytest, doctest, nose, and Django testing frameworks. It optionally tracks code coverage, to indicate how well code is being tested and to re-run only tests affected by changes to code. Remote development Wing Pro also supports secure development on remote hosts, virtual machines, or containers hosted by Docker, Docker Compose, or LXC/LXD. Code on the remote system may be edited, debugged, tested, and managed from the IDE, as for locally stored files. Remote development also supports externally launched debugging. Other features Other features present in all versions include: Wing Personal adds: Wing Pro adds: History The first public version of Wing was released on the 7th of September of 2000, as 1.0 beta, only for Linux. The first stable version was v1.0 for Linux, released on the 1st of December of 2000. As of March 29, 2004, Archaeopteryx Software Inc began doing business as Wingware. Wing version 4.x and earlier were based on GTK2 and the OS X version required X11. Wing 5 changed to Qt4 via PySide and no longer uses X11 on OS X. Wing 6 moved to Qt5 with PyQt5. Wing 10 uses PyQt6.5. See also References External links |
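The unit-testing support described above runs ordinary Python test files. The sketch below is the kind of small pytest-style module such a test runner can discover and execute; the function and test names are made up for illustration, and the function under test is defined inline so the example is self-contained.

    # A small pytest-style test module; running "pytest" in the containing
    # directory discovers the test_ functions and reports their results.
    def running_total(numbers):
        total = 0
        for n in numbers:
            total += n
        return total

    def test_running_total_of_consecutive_integers():
        assert running_total(range(1, 1001)) == 500500

    def test_running_total_of_empty_sequence():
        assert running_total([]) == 0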
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kaggle_Notebooks] | [TOKENS: 984] |
Contents Kaggle Kaggle is a data science competition platform and online community for data scientists and machine learning practitioners under Google LLC. Kaggle enables users to find and publish datasets, explore and build models in a web-based data science environment, work with other data scientists and machine learning engineers, and enter competitions to solve data science challenges. History Kaggle was founded by Anthony Goldbloom in April 2010. Jeremy Howard, one of the first Kaggle users, joined in November 2010 and served as the President and Chief Scientist. Also on the team was Nicholas Gruen serving as the founding chair. In 2011, the company raised $12.5 million and Max Levchin became the chairman. On March 8, 2017, Fei-Fei Li, Chief Scientist at Google, announced that Google was acquiring Kaggle. In June 2017, Kaggle surpassed 1 million registered users, and as of October 2023, it has over 15 million users in 194 countries. In 2022, founders Goldbloom and Hamner stepped down from their positions and D. Sculley became the CEO. In February 2023, Kaggle introduced Models, allowing users to discover and use pre-trained models through deep integrations with the rest of Kaggle’s platform. In April 2025, Kaggle partnered with Wikimedia Foundation. Site overview Many machine-learning competitions have been run on Kaggle since the company was founded. Notable competitions include gesture recognition for Microsoft Kinect, making a football AI for Manchester City, coding a trading algorithm for Two Sigma Investments, and improving the search for the Higgs boson at CERN. The competition host prepares the data and a description of the problem; the host may choose whether it's going to be rewarded with money or be unpaid. Participants experiment with different techniques and compete against each other to produce the best models. Work is shared publicly through Kaggle Kernels to achieve a better benchmark and to inspire new ideas. Submissions can be made through Kaggle Kernels, via manual upload or using the Kaggle API. For most competitions, submissions are scored immediately (based on their predictive accuracy relative to a hidden solution file) and summarized on a live leaderboard. After the deadline passes, the competition host pays the prize money in exchange for "a worldwide, perpetual, irrevocable and royalty-free license [...] to use the winning Entry", i.e. the algorithm, software and related intellectual property developed, which is "non-exclusive unless otherwise specified". Alongside its public competitions, Kaggle also offers private competitions, which are limited to Kaggle's top participants. Kaggle offers a free tool for data science teachers to run academic machine-learning competitions. Kaggle also hosts recruiting competitions in which data scientists compete for a chance to interview at leading data science companies like Facebook, Winton Capital, and Walmart. Kaggle's competitions have resulted in successful projects such as furthering HIV research, chess ratings and traffic forecasting. Geoffrey Hinton and George Dahl used deep neural networks to win a competition hosted by Merck.[citation needed] Vlad Mnih (one of Hinton's students) used deep neural networks to win a competition hosted by Adzuna.[citation needed] This resulted in the technique being taken up by others in the Kaggle community. 
Tianqi Chen from the University of Washington also used Kaggle to show the power of XGBoost, which has since replaced Random Forest as one of the main methods used to win Kaggle competitions.[citation needed] Several academic papers have been published based on findings from Kaggle competitions. A contributor to this is the live leaderboard, which encourages participants to continue innovating beyond existing best practices. The winning methods are frequently written on the Kaggle Winner's Blog. Kaggle has implemented a progression system to recognize and reward users based on their contributions and achievements within the platform. This system consists of five tiers: Novice, Contributor, Expert, Master, and Grandmaster. Each tier is achieved by meeting specific criteria in competitions, datasets, kernels (code-sharing), and discussions. The highest tier, Kaggle Grandmaster, is awarded to users who have ranked at the top of multiple competitions including high ranking in a solo team. As of April 2, 2025, out of 23.29 million Kaggle accounts, 2,973 have achieved Kaggle Master status and 612 have achieved Kaggle Grandmaster status. Kaggle includes a free, browser-based online integrated development environment, called Kaggle Notebooks, designed for data science and machine learning. Users can write and execute code in Python or R, import datasets, use popular libraries, and train models on CPUs, GPUs, or TPUs directly in the cloud. This environment is often used for competition submissions, tutorials, education, and exploratory data analysis. See also References Further reading |
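A Kaggle Notebook session for a competition typically amounts to a short Python workflow of the kind sketched below: read the competition data, fit a model, and write a submission file. The input paths, column names, and choice of model here are assumptions made for illustration; the actual layout depends on the specific competition.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Paths under /kaggle/input/... and the "id"/"target" columns are hypothetical.
    train = pd.read_csv("/kaggle/input/some-competition/train.csv")
    test = pd.read_csv("/kaggle/input/some-competition/test.csv")

    features = [c for c in train.columns if c not in ("id", "target")]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(train[features], train["target"])

    # Competitions usually score a CSV of predictions; the exact format varies.
    submission = pd.DataFrame({"id": test["id"], "target": model.predict(test[features])})
    submission.to_csv("submission.csv", index=False)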
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ActivePython] | [TOKENS: 1095] |
Contents ActiveState ActiveState Software Inc. is a Canadian software company that develops and supports tools for managing open source software in enterprise environments. The company provides solutions for automated vulnerability management, container security, and software supply chain security. ActiveState delivers secure containers and language runtimes that development teams can incorporate into their software development lifecycle (SDLC), and its platform is designed to help DevSecOps teams automatically identify, prioritize, and remediate vulnerabilities in open source components, and to manage dependencies across programming languages. As of 2025, ActiveState reports serving approximately 94,000 monthly active users and supporting more than 40 million open source libraries. The company states that its platform is compliant with the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF). ActiveState is privately held and jointly owned by its employees and Vertu Capital, a Canadian private equity firm. History ActiveState Software Inc. was founded in 1997 in Vancouver, British Columbia, Canada, by Dick Hardt, David Ascher, and others, with the goal of adapting open source programming languages for commercial use. The company became known for commercial distributions of languages such as Perl, Python, and Tcl for Windows and enterprise platforms. In 2003, the company was acquired by British security software firm Sophos, a UK-based security software company, and operated as a wholly owned subsidiary. In January 2006, ActiveState was purchased by Pender Financial Group, a Canadian investment company. In early 2021, ActiveState received a strategic investment from Turn/River Capital, a San Francisco-based private equity firm. On November 7, 2023, the company was acquired by Vertu Capital, a Canadian private equity firm, and became jointly owned by Vertu Capital and ActiveState employees. Subsidiaries Products The ActiveState Platform is a cloud-based platform for building, managing, and securing open source software components across multiple programming languages. It automates the process of compiling packages from source code, implements the Supply Chain Levels for Software Artifacts (SLSA) security framework, and provides tools for vulnerability detection, risk prioritization, and remediation. The platform is delivered as a managed service in which ActiveState oversees the building, updating, and maintenance of open source components on behalf of its users. It can generate secure container images, manage dependencies, and create software bills of materials (SBOMs) in SPDX and CycloneDX formats. The managed model is intended to reduce the operational burden on development teams by handling security patching, compliance requirements, and dependency updates centrally. According to the company, the platform supports over 40 million open source components and is used by approximately 94,000 monthly active users. In 2024, ActiveState introduced Secure Containers, a set of prebuilt, zero–known-vulnerability container images for popular programming languages. The images are rebuilt nightly with signed SBOMs and attestations, and critical security vulnerabilities are remediated within seven days. Secure Containers are distributed through Docker Hub and are available for languages including Python, Java, Node.js, Go, .NET Core, Rust, Perl, PHP, and for common utilities such as curl, wget, and bash. A static base image is also available. 
Customers can request customization of a non-production container for evaluation purposes. The State Tool is a command-line utility included with the ActiveState Platform. It allows users to manage programming language runtimes and dependencies, create and share reproducible development environments, and integrate ActiveState's services into automated build and deployment workflows. The tool is also used for managing Python projects and environments, including creating isolated development setups and handling package installation from source. ActiveState continues to offer commercial distributions of programming languages, which are integrated into the ActiveState Platform. These include: These distributions were among the company's first products and remain part of its core offerings. ActiveState has developed several products that have since been discontinued or transferred: ActiveState confirmed that its Enterprise CI / CD Survey is available for participation by 2020. Based on how businesses commonly utilize CI / CD and how they address software runtime and create issues, the study is part of ActiveState's ongoing initiatives to promote the development of open-source technology. Media presence and industry engagement ActiveState regularly participates in industry media, webinars, and podcasts focused on topics such as software supply chain security, automated vulnerability management, and DevSecOps practices. The company has collaborated with technology media platforms including TechStrong TV, where executives and technical staff have discussed subjects ranging from balancing security with development speed to addressing technical debt. Notable appearances include: ActiveState also publishes content on its own channels, including blogs, videos, and white papers, and has partnered with distributors such as Carahsoft and Aquion to deliver its solutions to government agencies and independent software vendors. Awards and recognition In 2025, ActiveState was recognized by ComponentSource in its annual awards as a "Top Global Innovator" and a bestselling publisher. The company had previously been named a "Top 25 Publisher" by ComponentSource in 2021. ActiveState executives have also received individual recognition. In 2024, Chief Executive Officer Stephen Baker was named among the "Top 50 Software CEOs" by The Software Report. In the same year, Chief Marketing Officer Allyson Barr was included in the publication's list of "Top 50 Women Leaders in Software." References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Vim_(text_editor)] | [TOKENS: 1592] |
Contents Vim (text editor) Vim (/vɪm/; short for vi improved) is a free and open-source text editor. Vim provides both a terminal screen user interface and a graphical user interface (called gVim). Vim's documentation describes it as an improved form of the older vi text editor (though it is built from a distinct codebase). In release information, the author originally implied that Vim was an abbreviation for "Vi IMitation", but later, the expansion was changed to "Vi IMproved" because, as described by the author, the functionality had increased beyond that of a clone of vi. Some sources indicate the change happened with v2.0, but conflicting information (including from the author) suggests it may have happened as late as v3.0. Since its original release for the Amiga, Vim has been ported to many environments including Atari MiNT, BeOS, MS-DOS, Windows starting from Windows NT 3.1, OS/2, OS/390, MorphOS, OpenVMS, QNX, RISC OS, Linux, BSD, and Classic Mac OS. Also, Vim is shipped with Apple macOS. Independent ports of Vim are available for Android and iOS. Vim has been and continues to be popular for software development. In 2018, it was voted the most popular editor amongst Linux Journal readers. In 2015, the Stack Overflow developer survey found it to be the third most popular text editor, and, in 2019, the fifth most popular development environment. History In 1988, Bram Moolenaar began work on what would become Vim. He used the codebase for the Stevie editor ported to Amiga (by Tony Andrews et al.) as a starting point. Version 1.14 (completed 2 November 1991) became the first public release. It was distributed via Fish Disk #591 in January 1992. License Vim is released under the Vim license, which includes some charityware clauses that encourage users who enjoy the software to consider donating to children in Uganda. The Vim license is compatible with the GNU General Public License through a special clause allowing distribution of modified copies under the GNU GPL version 2.0 or later. User experience Vim provides a user experience like vi's that integrates keyboard-entered command input with a full-screen editing experience. Like vi, Vim tends to allow a user with a typical keyboard to keep their fingers on the home row, which can be an advantage for touch typing. Via its GUI mode (called gVim), it presents an interface with a more modern experience, including aspects such as menus, toolbars, and icons. The full functionality is still expressed through its command line mode. Vim has a built-in help facility accessible via the :help command. The Vim tutorial for beginners, called vimtutor, is usually installed alongside Vim, but is a separate executable that can be run on its own. The Vim Users' Manual details Vim's features and can be read from within Vim, or found online. Vim features various special memory entries called registers (not to be confused with hardware or processor registers). When cutting, deleting, copying, or pasting text, the user can choose to store the manipulated text in a register. There are 36 general-purpose registers associated with letters and numbers ([a-z0-9]) and a range of special ones that either contain special values (current filename, last command, etc.) or serve a special purpose. Like vi, Vim supports multiple editing modes. Depending on the mode, entered characters are either processed as command input or inserted as text. 
Vim has 14 modes (7 basic modes and 7 variants). Customization Vim is customizable and extensible, making it attractive to those who want control and flexibility in a text editing environment. Users can execute complex commands with key bindings, which can be customized and extended. The recording feature allows for the creation of macros to automate sequences of keystrokes and call internal or user-defined functions and mappings. Abbreviations, similar to macros and key mappings, facilitate the expansion of short strings of text into longer ones and can also be used to correct mistakes. Vim also features an easy mode for users wanting a simpler user experience. There are many plugins available that extend or add new functionality to Vim. These plugins are usually written in Vim's internal scripting language, vimscript (also known as VimL), but can be written in other languages as well. There are projects that bundle together complex scripts and customizations, aimed at turning Vim into a tool for a specific task or adding a major flavour to its behaviour. Examples include Cream, which makes Vim behave like a click-and-type editor, or VimOutliner, which provides a comfortable outliner for users of Unix-like systems. Improvements Vim provides many features beyond what vi provides. Some of Vim's enhancements include completion functions, comparison and merging of files (known as vimdiff), a comprehensive integrated help system, extended regular expressions, scripting languages (both native and through alternative scripting interpreters such as Perl, Python, Ruby, Tcl, etc.) including support for plugins, a graphical user interface (gvim), limited integrated development environment-like features, mouse interaction (both with and without the GUI), folding, editing of compressed or archived files in gzip, bzip2, zip, and tar format and files over network protocols such as SSH, FTP, and HTTP, session state preservation, spell checking, split (horizontal and vertical) and tabbed windows, Unicode and other multi-language support, syntax highlighting, trans-session command, search and cursor position histories, multiple-level and branching undo/redo history which can persist across editing sessions, and visual mode. Vim continually saves information to a file that allows for recovering from a crash. Generally, the file extension is ".swp", but if the user tries to open a file when the recovery file already exists, Vim notifies the user of the condition. If the user chooses to proceed, Vim uses a different extension to form a name for a file that does not exist. The extensions follow the progression ".swo", ".swn", ".swm", and so on. The feature can be disabled. Compatibility Vim provides a vi-compatibility mode that limits its functionality to be similar to that of vi. However, even in compatibility mode, Vim is not entirely compatible with vi as specified by POSIX. For example, Vim does not support vi's open mode. Vim's developers state that it is "very much compatible with Vi". Vim script Vim script (also called Vimscript or VimL) is the scripting language built into Vim. Based on the ex editor language of the original vi editor, early versions of Vim added commands for control flow and function definitions. Since version 7, Vim script also supports more advanced data types such as lists and dictionaries and a simple form of object-oriented programming. 
Built-in functions such as map() and filter() allow a basic form of functional programming, and Vim script has supported lambda expressions since version 8.0. Vim script is mostly written in an imperative programming style. Vim macros can contain a sequence of normal-mode commands, but can also invoke ex commands or functions written in Vim script for more complex tasks. Almost all extensions of the core Vim functionality (called plugins or, more commonly, scripts) are written in Vim script, but plugins can also utilize other languages like Perl, Python, Lua, Ruby, Tcl, or Racket. These plugins can be installed manually or through a plugin manager such as Vundle, Pathogen, or Vim-Plug. Vim script files are stored as plain text, like other code, and the filename extension is usually .vim. One notable exception is Vim's configuration file, .vimrc. Versions See also Notes References External links |
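As noted above, plugins can also be written in languages other than Vim script. The following is a minimal sketch of a Python-based extension using Vim's built-in python3 interface; it assumes a Vim built with +python3 and must be run from inside Vim (for example via :py3file), and the function name, command name, and mapping are illustrative examples, not part of Vim itself.

```python
# Minimal sketch of a Python-based Vim extension (assumes Vim built with +python3).
# Run from inside Vim, e.g. :py3file upcase_line.py; the names below are illustrative only.
import vim  # Vim's built-in Python module; only available inside a running Vim


def upcase_current_line():
    """Replace the line under the cursor with its upper-case version."""
    row, _col = vim.current.window.cursor     # cursor position: (1-based line, column)
    buf = vim.current.buffer                  # list-like, 0-indexed view of the buffer
    buf[row - 1] = buf[row - 1].upper()


# Register an ex command and a normal-mode mapping, mirroring how Vim script
# plugins usually expose their entry points to the user.
vim.command("command! UpcaseLine python3 upcase_current_line()")
vim.command("nnoremap <silent> <leader>U :UpcaseLine<CR>")
```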
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nuitka] | [TOKENS: 489] |
Contents Nuitka Nuitka (pronounced as /njuːtkʌ/) is a source-to-source compiler which compiles Python code to C source code, applying some compile-time optimizations in the process such as constant folding and propagation, built-in call prediction, type inference, and conditional statement execution. Nuitka was initially designed to produce C++ code, but current versions produce C source code using only those features of C11 that are shared by C++03, enabling further compilation to a binary executable format by modern C and C++ compilers including gcc, clang, MinGW, or Microsoft Visual C++. It accepts Python code compatible with several different Python versions (currently supporting versions 2.6, 2.7, and 3.3–3.13) and optionally allows for the creation of standalone programs that do not require Python to be installed on the target computer. Nuitka was discussed at the 2012 EuroPython conference, and serious development began at the end of the same year. It now supports virtually all of the features of the Python language. Additional compile-time optimizations are planned for future releases, including avoiding the use of Python objects for additional variables whose type can be inferred at compile time, particularly when using iterators, which is expected to result in a large performance increase. Limitations Currently it is not possible to cross-compile binaries (e.g. building the executable on Windows and shipping it to macOS). Standalone binaries built using the --standalone command line option include an embedded CPython interpreter to handle aspects of the language that are not determined when the program is compiled and must be interpreted at runtime, such as duck typing, exception handling, and dynamic code execution (the eval function and exec function or statement), along with those Python and native libraries that are needed for execution, leading to rather large file sizes. Nuitka's design heavily relies on the internals of the CPython interpreter, and as a result other implementations of the Python language such as PyPy, Jython, and IronPython cannot be used instead of CPython for the runtime interpreter and library. Usage Nuitka can be installed from the repositories of many Linux distributions. It can also be installed through pip (or pip3). Compilation is done either with nuitka program.py or by invoking Python with Nuitka as the module to run (python -m nuitka program.py). References External links |
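To make the invocations above concrete, here is a minimal, hypothetical example; the file name hello.py is an assumption, and the command lines simply restate the forms described in the Usage and Limitations sections.

```python
# hello.py - a trivial program to compile with Nuitka (file name is illustrative).
def main() -> None:
    print("Hello from a Nuitka-compiled binary")


if __name__ == "__main__":
    main()

# Typical invocations, as described above:
#   nuitka hello.py                         # compile via the nuitka entry point
#   python -m nuitka hello.py               # equivalent, running Nuitka as a module
#   python -m nuitka --standalone hello.py  # embed CPython and needed libraries so the
#                                           # target machine needs no Python installation
```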
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Outline_of_the_Python_programming_language#Python_conferences] | [TOKENS: 126] |
Contents Outline of the Python programming language The following outline is provided as an overview of and topical guide to Python: Python is a general-purpose, interpreted, object-oriented, multi-paradigm, and dynamically typed programming language known for its emphasis on code readability, developer productivity, and a broad standard library. Python was created by Guido van Rossum and first released in 1991. What type of language is Python? History of Python General Python concepts Issues and limitations Python implementations Python toolchain Notable projects using Python Python development communities Example source code Python publications Python programmers Python conferences Python learning resources See also External links References |
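Since the outline lists "Example source code" among its topics, a short, hedged illustration of the traits described above (readability, dynamic typing, a mixed object-oriented and functional style) may help; the snippet below is an arbitrary example, not drawn from the outline itself.

```python
# A small illustration of Python's readability, dynamic typing, and mixed
# object-oriented / functional style; the example itself is arbitrary.
from dataclasses import dataclass


@dataclass
class Book:
    title: str
    pages: int


def total_pages(books):
    """Dynamically typed: accepts any iterable of objects with a .pages attribute."""
    return sum(book.pages for book in books)


shelf = [Book("A Short Guide", 120), Book("A Longer Guide", 480)]
print(total_pages(shelf))  # 600
```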
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_for_S60] | [TOKENS: 336] |
Contents Python for S60 Python for S60, also called PyS60, is a port of the Python programming language for the S60 software platform, originally based on Python 2.2.2 from 2002. The port was developed by Nokia. The final version, PyS60-2.0.0, was released on 11 February 2010. It came with multiple improvements, the most notable of which was an update to a new core based on Python 2.5.4. Release history First released in 2005, PyS60 featured a relatively small set of modules and functions. Version 1.2, the last closed-source release and the second version of PyS60, brought many improvements and was made available on 21 October 2005 on the Nokia Forums. After becoming open-source, PyS60 had the advantage of a strong and dedicated community that actively contributed to improving it. The milestone release was version 1.3.11. The final version that supported the S60 2nd Edition platform, 1.4.5, was released on 3 December 2008. On 24 December 2008, a developer version, 1.9.0, was released. It featured several improvements, the most notable of which was a new core based on Python 2.5.1. The final version, 2.0.0, released on 11 February 2010, updated the core to Python 2.5.4. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Recessive] | [TOKENS: 2787] |
Contents Dominance (genetics) In genetics, dominance is the phenomenon of one variant (allele) of a gene on a chromosome masking or overriding the effect of a different variant of the same gene on the other copy of the chromosome. The first variant is termed dominant and the second is called recessive. This state of having two different variants of the same gene on each chromosome is originally caused by a mutation in one of the genes, either new (de novo) or inherited. The terms autosomal dominant or autosomal recessive are used to describe gene variants on non-sex chromosomes (autosomes) and their associated traits, while those on sex chromosomes (allosomes) are termed X-linked dominant, X-linked recessive or Y-linked; these have an inheritance and presentation pattern that depends on the sex of both the parent and the child (see Sex linkage). Since there is only one Y chromosome, Y-linked traits cannot be dominant or recessive. Additionally, there are other forms of dominance, such as incomplete dominance, in which a gene variant has a partial effect compared to when it is present on both chromosomes, and co-dominance, in which different variants on each chromosome both show their associated traits. Dominance is a key concept in Mendelian inheritance and classical genetics. Letters and Punnett squares are used to demonstrate the principles of dominance in teaching, and the upper-case letters are used to denote dominant alleles and lower-case letters are used for recessive alleles. An often quoted example of dominance is the inheritance of seed shape in peas. Peas may be round, associated with allele R, or wrinkled, associated with allele r. In this case, three combinations of alleles (genotypes) are possible: RR, Rr, and rr. The RR (homozygous) individuals have round peas, and the rr (homozygous) individuals have wrinkled peas. In Rr (heterozygous) individuals, the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is dominant over allele r, and allele r is recessive to allele R. Dominance is not inherent to an allele or its traits (phenotype). It is a strictly relative effect between two alleles of a given gene of any function; one allele can be dominant over a second allele of the same gene, recessive to a third, and co-dominant with a fourth. Additionally, one allele may be dominant for one trait but not others. Dominance differs from epistasis, the phenomenon of an allele of one gene masking the effect of alleles of a different gene. Background Gregor Johann Mendel, "The Father of Genetics", promulgated the idea of dominance in the 1860s. However, it was not widely known until the early twentieth century. Mendel observed that, for a variety of traits of garden peas having to do with the appearance of seeds, seed pods, and plants, there were two discrete phenotypes, such as round versus wrinkled seeds, yellow versus green seeds, red versus white flowers or tall versus short plants. When bred separately, the plants always produced the same phenotypes, generation after generation. However, when lines with different phenotypes were crossed (interbred), one and only one of the parental phenotypes showed up in the offspring (green, round, red, or tall). However, when these hybrid plants were crossed, the offspring plants showed the two original phenotypes, in a characteristic 3:1 ratio, the more common phenotype being that of the parental hybrid plants. 
Mendel reasoned that each parent in the first cross was a homozygote for different alleles (one parent AA and the other parent aa), that each contributed one allele to the offspring, with the result that all of these hybrids were heterozygotes (Aa), and that one of the two alleles in the hybrid cross dominated expression of the other: A masked a. The final cross between two heterozygotes (Aa X Aa) would produce AA, Aa, and aa offspring in a 1:2:1 genotype ratio with the first two classes showing the (A) phenotype, and the last showing the (a) phenotype, thereby producing the 3:1 phenotype ratio. Mendel did not use the terms gene, allele, phenotype, genotype, homozygote, and heterozygote, all of which were introduced later. He did introduce the notation of capital and lowercase letters for dominant and recessive alleles, respectively, still in use today. In 1928, British population geneticist Ronald Fisher proposed that dominance acted based on natural selection through the contribution of modifier genes. In 1929, American geneticist Sewall Wright responded by stating that dominance is simply a physiological consequence of metabolic pathways and the relative necessity of the gene involved. Types of dominance In complete dominance, the effect of one allele in a heterozygous genotype completely masks the effect of the other. The allele that masks is considered dominant to the other allele, and the masked allele is considered recessive. When we look at only one trait determined by one pair of genes, we call it monohybrid inheritance. If the crossing is done between parents (P-generation, F0-generation) who are homozygote dominant and homozygote recessive, the offspring (F1-generation) will always have the heterozygote genotype and always present the phenotype associated with the dominant gene. However, if the F1-generation is further crossed with the F1-generation (heterozygote crossed with heterozygote), the offspring (F2-generation) will present the phenotype associated with the dominant gene ¾ of the time. Although a heterozygote monohybrid cross can result in only two phenotype variants, it can result in three genotype variants: homozygote dominant, heterozygote, and homozygote recessive. In dihybrid inheritance we look at the inheritance of two pairs of genes simultaneously. Assume here that the two pairs of genes are located on non-homologous chromosomes, so that they are not coupled genes (see genetic linkage) but are instead inherited independently. Consider now the cross between parents (P-generation) whose genotypes are homozygote dominant and homozygote recessive, respectively. The offspring (F1-generation) will always be heterozygous and present the phenotype associated with the dominant allele variant. However, when the F1-generation is crossed with itself there are four possible phenotypes, and the phenotypic ratio of the F2-generation will always be 9:3:3:1. Incomplete dominance (also called partial dominance, semi-dominance, intermediate inheritance, or occasionally incorrectly co-dominance in reptile genetics) occurs when the phenotype of the heterozygous genotype is distinct from and often intermediate to the phenotypes of the homozygous genotypes. The phenotypic result often appears as a blended form of characteristics in the heterozygous state. For example, snapdragon plants may be homozygous for either red or white flower color. When a red homozygous flower is paired with a white homozygous flower, the result yields a pink snapdragon flower. 
The pink snapdragon is the result of incomplete dominance. A similar type of incomplete dominance is found in the four o'clock plant wherein pink color is produced when true-bred parents of white and red flowers are crossed. In quantitative genetics, where phenotypes are measured and treated numerically, if a heterozygote's phenotype is exactly between (numerically) that of the two homozygotes, the phenotype is said to exhibit no dominance at all, i.e. dominance exists only when the heterozygote's phenotype measure lies closer to one homozygote than the other. When plants of the F1 generation are self-pollinated, the phenotypic and genotypic ratio of the F2 generation will be 1:2:1 (Red:Pink:White). Co-dominance occurs when the contributions of both alleles are visible in the phenotype and neither allele masks another. For example, in the ABO blood group system, chemical modifications to a glycoprotein (the H antigen) on the surfaces of blood cells are controlled by three alleles, two of which are co-dominant to each other (IA, IB) and dominant over the recessive i at the ABO locus. The IA and IB alleles produce different modifications. The enzyme coded for by IA adds an N-acetylgalactosamine to a membrane-bound H antigen. The IB enzyme adds a galactose. The i allele produces no modification. Thus the IA and IB alleles are each dominant to i (IAIA and IAi individuals both have type A blood, and IBIB and IBi individuals both have type B blood), but IAIB individuals have both modifications on their blood cells and thus have type AB blood, so the IA and IB alleles are said to be co-dominant. Another example occurs at the locus for the beta-globin component of hemoglobin, where the three molecular phenotypes of HbA/HbA, HbA/HbS, and HbS/HbS are all distinguishable by protein electrophoresis. (The medical condition produced by the heterozygous genotype is called sickle-cell trait and is a milder condition distinguishable from sickle-cell anemia, thus the alleles show incomplete dominance concerning anemia, see above). For most gene loci at the molecular level, both alleles are expressed co-dominantly, because both are transcribed into RNA. Co-dominance, where allelic products co-exist in the phenotype, is different from incomplete dominance, where the quantitative interaction of allele products produces an intermediate phenotype. For example, in co-dominance, a red homozygous flower and a white homozygous flower will produce offspring that have red and white spots. When plants of the F1 generation are self-pollinated, the phenotypic and genotypic ratio of the F2 generation will be 1:2:1 (Red:Spotted:White). These ratios are the same as those for incomplete dominance. Again, this classical terminology is inappropriate – in reality, such cases should not be said to exhibit dominance at all. Relationship to other genetic concepts Dominance can be influenced by various genetic interactions and it is essential to evaluate them when determining phenotypic outcomes. Multiple alleles, epistasis, pleiotropic genes, and polygenic characteristics are some factors that might influence the phenotypic outcome. Although any individual of a diploid organism has at most two different alleles at a given locus, most genes exist in a large number of allelic versions in the population as a whole. This is called polymorphism, and is caused by mutations. Polymorphism can have an effect on the dominance relationship and phenotype, which is observed in the ABO blood group system. 
The gene responsible for human blood type has three alleles: A, B, and O; their interactions result in different blood types based on the level of dominance the alleles express towards each other. Epistasis is an interaction between multiple alleles at different loci. More specifically, epistasis is when one gene can mask the phenotype of a gene at a completely different locus. Therefore, several genes can influence the phenotype expressed. Epistasis is slightly different from dominance in that dominance is an allele-to-allele interaction at one locus while epistasis is a gene-to-gene interaction at different loci. The dominance relationship between alleles involved in epistatic interactions can influence the observed phenotypic ratios in offspring. An example of epistasis can be seen in Labrador retriever coat colors. One gene at one locus codes for the color of hair but another gene at a different locus determines whether the color is even deposited in the hair. Recessive epistasis is seen in this example because recessive alleles for color deposition mask both the dominant black (B) allele and the recessive brown (b) allele at the first locus to express a yellow coat in the Labrador retriever. The yellow color comes from no pigment being deposited in the hair shaft. Other examples of epistasis interactions are dominant epistasis and duplicate recessive epistasis. Each type of epistasis is a modification of the dihybrid ratio of 9:3:3:1. Pleiotropic genes are genes where one single gene affects two or more characteristics. An example of this concept is Marfan syndrome, which is caused by a mutation of the FBN1 gene. Its effects include a tall, long-limbed appearance; affected individuals can also have scoliosis, ectopia lentis, and larger-than-normal aortas. Pleiotropy is related to epistasis: while pleiotropy involves a single gene, epistasis involves multiple genes interacting with one another to cause different traits to arise. It is helpful to recognize how epistasis could affect the observation of pleiotropic genes if different traits arise or mask themselves to varying degrees. Polygenic characteristics are those affected by multiple genes at different loci. These different genes interact in a way to produce a quantitative characteristic, which is a characteristic that presents a wide variety of phenotypes, such as height in humans. The greater the number of genes that interact to influence this characteristic, the greater the number of different phenotypes possible due to more possible genotypes. Many more characteristics, including diabetes and some autoimmune diseases, also appear to be affected by more than one gene located at different loci. See also References External links |
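To make the monohybrid and dihybrid ratios discussed above concrete, here is a minimal sketch that enumerates Punnett squares under the simplifying assumption of complete dominance (an upper-case allele masks its lower-case counterpart); the allele symbols and function names are illustrative only.

```python
# Enumerate Punnett squares under complete dominance (illustrative sketch only).
from collections import Counter
from itertools import product


def cross(parent1: str, parent2: str) -> Counter:
    """Offspring genotype counts for a single-locus cross, e.g. 'Aa' x 'Aa'."""
    counts = Counter()
    for a, b in product(parent1, parent2):
        counts["".join(sorted((a, b)))] += 1  # sort so 'aA' and 'Aa' are the same genotype
    return counts


def phenotype(genotype: str) -> str:
    """With complete dominance, any upper-case allele gives the dominant phenotype."""
    return "dominant" if any(ch.isupper() for ch in genotype) else "recessive"


# Monohybrid F1 x F1 cross: genotypes 1 AA : 2 Aa : 1 aa, phenotypes 3 dominant : 1 recessive.
f2 = cross("Aa", "Aa")
print(f2)
print(Counter(phenotype(g) for g in f2.elements()))

# Dihybrid F2 phenotype ratio from two independently inherited loci: 9:3:3:1.
loci = [Counter(phenotype(g) for g in cross(p, p).elements()) for p in ("Aa", "Bb")]
dihybrid = Counter()
for (pa, na), (pb, nb) in product(loci[0].items(), loci[1].items()):
    dihybrid[(pa, pb)] += na * nb
print(dihybrid)  # (dominant, dominant): 9, (dominant, recessive): 3, ..., (recessive, recessive): 1
```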
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-52] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed to the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partially needed to use Microsoft's cloud-computing service Azure. From September to December, 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, and they added MS-Copilot to many installations of Windows and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding their right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company’s cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This aggressive spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion. The deal made OpenAI the most valuable privately owned company in the world, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky’s capabilities into ChatGPT. In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI’s ChatGPT Health product and was intended to strengthen the company’s medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non‑Nvidia AI chips. In September 2025, it was revealed that OpenAI signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI’s models could be integrated into Amazon’s digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned lieutenant colonel in the U.S. Army to join Detachment 201 as senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed strawberry. Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. OpenAI released its deep research agent nine days later. It scored 27% accuracy on the Humanity's Last Exam (HLE) benchmark. Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning. 
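The API mentioned above remains the way developers access OpenAI's models programmatically. The following minimal sketch uses the official openai Python package; the package version (1.x), the OPENAI_API_KEY environment variable, and the model name are assumptions for illustration and do not come from the article.

```python
# Minimal sketch of calling the OpenAI API with the official Python package.
# Assumes: `pip install openai` (1.x series) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not taken from the article
    messages=[{"role": "user", "content": "Explain in one sentence what an API key is."}],
)
print(response.choices[0].message.content)
```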
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2. The model is described as better at creating spreadsheets, building presentations, perceiving images, writing code, and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform utilizes GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, complex equation formatting, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this repudiation. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team denied receiving anything close to 20%. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions that included personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. 
Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media. Paul Nakasone then joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025, the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroached on matters the federal government is better placed to regulate. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept as well as Raw Story and Alternate Media Inc. filed a lawsuit against OpenAI on copyright grounds. The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. 
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit users from "[using] our service to harm yourself or others" and from using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. 
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, Stein-Erik Soelberg, who was 56 years old at the time, allegedly murdered his mother, Suzanne Adams. In the months prior, Soelberg, who was reportedly paranoid and delusional, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility because of the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who appear disconnected from reality. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-22] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains the design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
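To make the program counter and jump mechanism described above concrete, here is a minimal sketch in Python (the three-instruction machine, its instruction names, and the memory layout are invented for illustration and not taken from any real CPU): the program counter normally advances by one after each instruction, and a conditional jump simply overwrites it.

```python
# A toy fetch-execute loop: the program counter (pc) selects the next
# instruction, and a "jump" simply overwrites the program counter.
def run(program, memory):
    pc = 0                                # program counter: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        pc += 1                           # default: fall through to the next instruction
        if op == "add":                   # memory[dst] = memory[a] + memory[b]
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]
        elif op == "jump_if_less":        # conditional jump: rewrite the program counter
            a, b, target = args
            if memory[a] < memory[b]:
                pc = target
        elif op == "halt":
            break
    return memory

# Example: keep doubling cell 0 until it is no longer less than cell 1.
mem = {0: 1, 1: 100}
prog = [
    ("add", 0, 0, 0),           # cell 0 = cell 0 + cell 0
    ("jump_if_less", 0, 1, 0),  # if cell 0 < cell 1, jump back to instruction 0
    ("halt",),
]
print(run(prog, mem))           # {0: 128, 1: 100}
```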
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256), either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
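A short Python sketch of these ideas (the cell numbers 1357, 2468, and 1595 mirror the example above; everything else is illustrative): memory behaves like a numbered list of cells, an 8-bit pattern can be read as either an unsigned or a two's-complement value, and larger numbers span several consecutive bytes.

```python
# Memory behaves like a long list of numbered cells.
memory = [0] * 4096
memory[1357] = 123                                   # "put the number 123 into cell 1357"
memory[1595] = memory[1357] + memory[2468]           # add cells 1357 and 2468, result into 1595

# An 8-bit byte holds 2**8 = 256 patterns; the same pattern can be read as an
# unsigned value (0..255) or as a signed two's-complement value (-128..127).
def to_signed(byte):
    return byte - 256 if byte >= 128 else byte

for pattern in (0, 1, 127, 128, 255):
    print(f"{pattern:08b}  unsigned={pattern:4d}  signed={to_signed(pattern):4d}")

# Larger numbers occupy several consecutive bytes, e.g. four bytes here:
cells = (123456789).to_bytes(4, "little")
print(list(cells), int.from_bytes(cells, "little"))  # [21, 205, 91, 7] 123456789
```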
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. 
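A minimal sketch of time-sharing in Python (the generator-based "programs" and the one-step time slice are illustrative simplifications; a real operating system relies on hardware interrupts rather than cooperative yields): a round-robin scheduler runs each program for one slice at a time, so their output interleaves as if they ran simultaneously.

```python
# A toy time-sharing scheduler: each "program" is a generator, and the
# scheduler gives each one a short time slice (one step) in round-robin order.
from collections import deque

def count_up(name, n):
    for i in range(1, n + 1):
        print(f"{name}: {i}")
        yield                      # give up the CPU (stands in for an interrupt)

def scheduler(programs):
    ready = deque(programs)
    while ready:
        prog = ready.popleft()
        try:
            next(prog)             # run one time slice
            ready.append(prog)     # still has work: back of the queue
        except StopIteration:
            pass                   # program finished

scheduler([count_up("A", 3), count_up("B", 2)])
# Output interleaves A and B, giving the appearance of simultaneous execution:
# A: 1, B: 1, A: 2, B: 2, A: 3
```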
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
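A sketch of such a program in MIPS assembly (the register choices, label names, and use of $v0 for the result are illustrative, and assembler conventions vary): a running sum and a counter are kept in registers, a conditional branch leaves the loop once the counter passes 1,000, and a jump repeats the loop body.

```
start:  addi $t0, $zero, 0       # $t0 = running sum, start at 0
        addi $t1, $zero, 1       # $t1 = current number, start at 1
loop:   slti $t2, $t1, 1001      # $t2 = 1 while the current number <= 1000
        beq  $t2, $zero, done    # once the number exceeds 1000, leave the loop
        add  $t0, $t0, $t1       # add the current number to the running sum
        addi $t1, $t1, 1         # move to the next number
        j    loop                # jump back and repeat
done:   add  $v0, $t0, $zero     # copy the result (500500) into $v0
```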
The example above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. |
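Returning to the machine-learning remark above, the claim that a model's parameters are adjusted throughout training can be illustrated with a deliberately tiny sketch: fitting the single parameter of the model y = w * x to a handful of made-up data points by gradient descent. The data, the learning rate, and the loop count are all invented for illustration; real systems train vastly more parameters with dedicated libraries and hardware.

```python
# Minimal illustration of "parameters adjusted throughout training".
# The model y = w * x has a single parameter w; real models have millions
# or billions, but the adjustment loop has the same shape.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]  # made-up (x, y) pairs

w = 0.0                 # initial parameter value
learning_rate = 0.01

for step in range(1000):
    # Gradient of the mean squared error with respect to w on this data.
    grad = 0.0
    for x, y in data:
        prediction = w * x
        # d/dw of (prediction - y)^2 is 2 * (prediction - y) * x
        grad += 2 * (prediction - y) * x
    grad /= len(data)
    # Adjust the parameter a little in the direction that reduces the error.
    w -= learning_rate * grad

print(round(w, 3))  # about 2.023: the parameter has been adjusted to fit the data
```

The shape of the loop is the point: compute the error of the current parameters, compute how the error changes as each parameter changes, and nudge the parameters in the direction that reduces the error.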
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Libation#Ancient_Rome] | [TOKENS: 4462] |
Contents Libation A libation is a ritual pouring of a liquid as an offering to a deity or spirit, or in memory of the dead. It was common in many religions of antiquity and continues to be offered in cultures today. Various substances have been used for libations, most commonly wine or other alcoholic drinks, olive oil, honey, and in India, ghee. The vessels used in the ritual, including the patera, often had a significant form which differentiated them from secular vessels. The libation could be poured onto something of religious significance, such as an altar, or into the earth. On the other hand, one or more libations began most meals and occasions when wine was drunk in Greco-Roman and other ancient societies, mostly using normal cups or jugs. Etymology The English word "libation" derives from the Latin libatio, an act of pouring, from the verb libare, "to taste, sip; pour out, make a libation" (Indo-European root *leib-, "pour, make a libation"). Religious practice Ancient Egypt provides some of the earliest and most continuous archaeological and textual evidence for libation practices in the ancient world. Libation was a fundamental ritual practice in ancient Egyptian religion, involving the pouring of liquids as offerings to deities, sacred ancestors, the deceased, living persons who were ritually absent, and the natural environment. Archaeological evidence indicates that libation practices were already established in the Predynastic period, particularly during Naqada I and II (c. 4000–3300 BCE), prior to the widespread use of writing in Egypt. Libations typically consisted of water, beer, wine, milk, or oils and formed a core component of funerary and temple ritual throughout pharaonic history. In Egyptian religious thought, libations functioned to purify sacred space and to sustain the ka, the vital essence of gods and humans, and are among the most frequently depicted ritual acts in Egyptian art, appearing in tombs, temples, and on offering equipment from the Early Dynastic period onward. The Sumerian afterlife was a dark, dreary cavern located deep below the ground. This bleak domain was known as Kur, where the souls were believed to eat nothing but dry dust. Family members would offer a drink to the deceased by ritually pouring libations into the grave through a clay pipe. Libation (Ancient Greek: σπονδή, spondȇ, [spondɛ̌ː]) was a central and vital aspect of ancient Greek religion, and one of the simplest and most common forms of religious practice. It is one of the basic religious acts that define piety in ancient Greece, dating back to the Bronze Age and even prehistoric Greece. Libations were a part of daily life, and the pious might perform them every day in the morning and evening, as well as to begin meals. A libation most often consisted of mixed wine and water, but could also be unmixed wine, honey, oil, water, or milk. The typical form of libation, spondȇ, is the ritualized pouring of wine from a jug or bowl held in the hand. The most common ritual was to pour the liquid from an oinochoē (wine jug) into a phiale, a shallow bowl designed for the purpose. After wine was poured from the phiale, the remainder of the oinochoē's contents was drunk by the celebrant. A libation is poured any time wine is to be drunk, a practice that is recorded as early as the Homeric epics. The etiquette of the symposium required that when the first bowl (krater) of wine was served, a libation was made to Zeus and the Olympian gods. 
Heroes received a libation from the second krater served, and Zeús Téleios (Ζεύς Tέλειος, lit. "Zeus who Finishes") from the third, which was supposed to be the last. An alternative was to offer a libation from the first bowl to the Agathos Daimon and from the third bowl to Hermes. An individual at the symposium could also make an invocation of and libation to a god of his choice. Libation generally accompanied prayer. The Greeks stood when they prayed, either with their arms uplifted, or in the act of libation with the right arm extended to hold the phiale. In conducting animal sacrifice, wine is poured onto the offering as part of its ritual slaughter and preparation, and then afterwards onto the ash and flames. This scene is commonly depicted in Greek art, which also often shows sacrificers or the gods themselves holding the phiale. The Greek verb spéndō (σπένδω), "pour a libation", also "conclude a pact", derives from the Indo-European root *spend-, "make an offering, perform a rite, engage oneself by a ritual act". The noun is spondȇ (plural spondaí), "libation". In the middle voice, the verb means "enter into an agreement", in the sense that the gods are called to guarantee an action. Blood sacrifice was performed to begin a war; spondaí marked the conclusion of hostilities, and is often thus used in the sense of "armistice, treaty". The formula "We the polis have made libation" was a declaration of peace or the "Truce of God", which was observed also when the various city-states came together for the Panhellenic Games, the Olympic Games, or the festivals of the Eleusinian Mysteries: this form of libation is "bloodless, gentle, irrevocable, and final". Libations poured onto the earth are meant for the dead and for the chthonic gods. In the Book of the Dead in the Odyssey, Odysseus digs an offering pit around which he pours in order honey, wine, and water. For the form of libation called choē (Ancient Greek: χεῦμα, cheuma, "that which is poured"; from Proto-Indo-European *gʰeu-), a larger vessel is tipped over and emptied onto the ground for the chthonic gods, who may also receive spondai. Heroes, who were divinized mortals, might receive blood libations if they had participated in the bloodshed of war, as for instance Brasidas the Spartan. In rituals of caring for the dead at their tombs, libations would include milk and honey. The Libation Bearers is the English title of the center tragedy from the Orestes Trilogy of Aeschylus, in reference to the offerings Electra brings to the tomb of her dead father Agamemnon. Sophocles gives one of the most detailed descriptions of libation in Greek literature in Oedipus at Colonus, performed as atonement in the grove of the Eumenides: First, water is fetched from a freshly flowing spring; cauldrons which stand in the sanctuary are garlanded with wool and filled with water and honey; turning towards the east, the sacrificer tips the vessels towards the west; the olive branches which he has been holding in his hand he now strews on the ground at the place where the earth has drunk in the libation; and with a silent prayer he departs, not looking back. Hero of Alexandria described a mechanism for automating the process by using altar fires to force oil from the cups of two statues.[citation needed] In ancient Roman religion, the libation was a religious act in the form of a liquid offering, most often unmixed wine and perfumed oil. 
The Roman god Liber Pater ("Father Liber"), later identified with the Greek Dionysus or Bacchus, was the divinity of libamina, "libations", and liba, sacrificial cakes drizzled with honey. In Roman art, the libation is shown performed at a mensa (sacrificial meal table), or tripod. It was the simplest form of sacrifice, and could be a sufficient offering by itself. The introductory rite (praefatio) to an animal sacrifice included an incense and wine libation onto a burning altar. Both emperors and divinities are frequently depicted, especially on coins, pouring libations. Scenes of libation commonly signify the quality of pietas, religious duty or reverence. The libation was part of Roman funeral rites, and may have been the only sacrificial offering at humble funerals. Libations were poured in rituals of caring for the dead (see Parentalia and Caristia), and some tombs were equipped with tubes through which the offerings could be directed to the underground dead. Milk was unusual as a libation at Rome, but was regularly offered to a few deities, particularly those of an archaic nature or those for whom it was a natural complement, such as Rumina, a goddess of birth and childrearing who promoted the flow of breast milk, and Cunina, a tutelary of the cradle. It was offered also to Mercurius Sobrius (the "sober" Mercury), whose cult is well attested in Roman Africa and may have been imported to the city of Rome by an African community. Libations were part of ancient Judaism and are mentioned in the Bible: And Jacob set up a Pillar in the place where he had spoken with him, a Pillar of Stone; and he poured out a drink offering on it, and poured oil on it. — Genesis 35:14 In Isaiah 53:12, Isaiah uses libation as a metaphor when describing the end of the Suffering Servant figure who "poured out his life unto death". Libations of wine were offered at the Jerusalem temple, and a double libation of wine and water was offered during Sukkot, possibly as a rain making ritual. Idolatrous libations were forbidden, along with the Torah's prohibitions on idolatrous sacrifice and worship generally. Libation was part of ancient Egyptian society where it was a drink offering to honor and please the various divinities, sacred ancestors, humans present and humans who are alive but not physically present, as well as the environment. It is suggested that libation originated somewhere in the upper Nile Valley and spread out to other regions of Africa and the world. According to Ayi Kwei Armah, "[t]his legend explains the rise of a propitiatory custom found everywhere on the African continent: libation, the pouring of alcohol or other drinks as offerings to ancestors and divinities." In African cultures and African traditional religions the ritual of pouring libation is an essential ceremonial tradition and a way of giving homage to the ancestors. Ancestors are not only respected in such cultures, but also invited to participate in all public functions (as are also the gods and God). A prayer is offered in the form of libations, calling the ancestors to attend. The ritual is generally performed by an elder. Although water may be used, the drink is typically some traditional wine (e.g. palm wine), and the libation ritual is accompanied by an invitation (and invocation) to the ancestors, gods and God. 
In the Volta region of Ghana, water with a mixture of corn flour is also used to pour libation.[citation needed] Libation is also commonly recognized as the break within the famous performance of Agbekor, a ritual dance performed in West African cultures. It is also poured during traditional marriage ceremonies, when a child is born, and at funeral ceremonies, as well as during traditional festivals such as Asafotu and Homowo of the Ga-Adangbe people of Ghana and Togo. Libation is also poured during the installation of kings, queens, and chiefs.[citation needed] As recently as the 1920s, it was a custom in Lower Nubia for women to go to the graves of relatives every Friday and pour a libation of water into a red bowl at the head of the grave. For widows, it was also once a custom for them to pour a libation of milk on their husband's grave the second day after his death. Similarly, it has been Coptic tradition for women to visit graves and make water libations, both in intervals during the first 40 days after a death, and during a few annual occasions, such as Nayrouz.

In the Quechua and Aymara cultures of the South American Andes, it is common to pour a small amount of one's beverage on the ground before drinking as an offering to the Pachamama, or Mother Earth. This especially holds true when drinking Chicha, an alcoholic beverage unique to this part of the world. The libation ritual is commonly called challa and is performed quite often, usually before meals and during celebrations. The sixteenth-century writer Bernardino de Sahagún records the Aztec ceremony associated with drinking octli: Libation was done in this manner: when octli was drunk, when they tasted the new octli, when someone had just made octli...he summoned people. He set it out in a vessel before the hearth, along with small cups for drinking. Before having anyone drink, he took up octli with a cup and then poured it before the hearth; he poured the octli in the four directions. And when he had poured the octli then everyone drank it.

In Hinduism, libation rituals most often involve pouring the offered liquid over a murti or sacred image. Many temple images receive libations from the priests daily. Libations are part of Tarpan and also performed during Pitru Paksha (Fortnight of the ancestors) following the Bhadrapada month of the Hindu calendar (September–October). In India and Nepal, Lord Shiva (also Vishnu and other deities) is offered abhisheka with water by devotees at many temples when they go visit the temple, and on special occasions elaborately with water, milk, yogurt, ghee, honey, and sugar.

In Burmese Buddhism, the water libation ceremony, called yay zet cha (ရေစက်ချ), involves the ceremonial pouring of water from a vessel into a vase, drop by drop, and concludes most Buddhist ceremonies, including donation celebrations, shinbyu, and feasts. This ceremonial libation is done to share the accrued merit with all other living beings in all 31 planes of existence. The ceremony has three primary prayers: the confession of faith, the pouring of water, and the sharing of merits. While the water is poured, a confession of faith, called the hsu taung imaya dhammanu (ဆုတောင်း ဣမာယ ဓမ္မာနု), is recited and led by the monks.
Then, the merit is distributed by the donors (called ahmya wei အမျှဝေ) by thrice saying the following: (To all those who can hear), we share our merits with all beings(Kya kya thahmya), ahmya ahmya ahmya yu daw mu gya ba gon law((ကြားကြားသမျှ) အမျှ အမျှ အမျှ ယူတော်မူကြပါ ကုန်လော) Afterward, in unison, the participants repeat thrice a declaration of affirmation: thadu (သာဓု, sadhu), Pali for "well done", akin to the Christian use of amen. Afterward, the libated water is poured on soil outside, to return the water to Vasudhara. The earth goddess Vasudhara is invoked to witness these meritorious deeds. Prior to colonial rule, the water libation ceremony was also performed during the crowning of Burmese kings, as part of procedures written in the Raza Thewaka Dipani Kyan, an 1849 text that outlines proper conduct of Burmese kings. Although the offering of water to Vasudhara may have pre-Buddhist roots, this ceremony is believed to have been started by King Bimbisara, who poured the libation of water, to share his merit with his ancestors who had become pretas. This ceremony is also practiced at the end of Thai and Laotian Buddhist rituals to transfer merit, where it is called kruat nam (กรวดน้ำ) and yaat nam respectively. The most traditional Chinese ritual bronze vessel for libations, the jue, has a large pouring lip, and may be regarded as a type of jug rather than a cup. In modern Chinese customs, rice wine or tea is poured in front of an altar or tombstone horizontally from right to left with both hands as an offering to gods and in honour of the deceased. The offering is usually placed on the altar for a while before being offered in libation. In more elaborate ceremonies honouring deities, the libation may be done over the burning paper offerings; whereas for the deceased, the wine is only poured onto the ground. Japanese libations leave the liquid offering on the altar in a suitable vessel, while other portions are drunk by the participants. In Shinto, the practice of libation and the drink offered is called Miki (神酒), lit. "The Liquor of the Gods". At a ceremony at a Shinto shrine, it is usually done with sake, but at a household shrine, one may substitute fresh water which can be changed every morning. It is served in a white porcelain or metal cup without any decoration. Among the Ainu, libations are offered by means of the ikupasuy, a carved wooden implement with a "tongue", the pointed end from which millet beer or sake is dripped upon the venerated object. Shamanism among Siberian peoples exhibits the great diversity characteristic of shamanism in general. Among several peoples near the Altai Mountains, the new drum of a shaman must go through a special ritual. This is regarded as "enlivening the drum": the tree and the deer who gave their wood and skin for the new drum narrate their whole lives and promise to the shaman that they will serve him. The ritual itself is a libation: beer is poured onto the skin and wood of the drum, and these materials "come to life" and speak with the voice of the shaman in the name of the tree and the deer. Among the Tubalar, moreover, the shaman imitates the voice of the animal, and its behaviour as well. Modern customs In Cuba, a widespread custom is to spill a drop or two of rum from one's glass while saying "para los santos" ('for the Saints'). An identical practice is found in Brazil when cachaça is drunk, with the drops being offered "para o santo" or "para o santinho". 
These customs are similar to the practice among Visayans of Mindanao, the Philippines, where rum is spilled upon opening of the bottle, accompanied by "para sa yawa" ('for the Devil'). In Russia and some parts of the Commonwealth of Independent States, there is a tradition of pouring vodka onto a grave, an act possibly connected with the dziady custom. In Georgia, where wine plays a more culturally significant role, it is common to pour a glass of wine on graves, especially around Easter in commemoration of all the deceased. In the contemporary United States, libations are occasionally offered in the name of a deceased person on various occasions, usually when drinking socially among friends in a private setting. There is also a tradition of pouring libations of malt liquor before drinking, which is particularly associated with African American rappers. This is referred to as "tipping" to one's deceased friends, or "pouring one out". This practice has been recorded in film, such as Boyz n the Hood, and referenced in various songs, such as the 1993 "Gangsta Lean (This Is For My Homies)" by DRS ("I tip my 40 to your memory") and the 1994 "Pour Out a Little Liquor" by 2Pac. As with similar practices worldwide, various symbolic sayings accompany the pouring.

In Rabbinic Judaism, drops of wine are taken from one's glass at the Passover Seder by pouring them out or dipping one's finger into the glass, either ten (one for each plague) or sixteen: ten for the ten plagues, three for "Blood, Fire and Columns of Smoke", and three for "Detzach, Adash, B'achav". Explanations vary, but the common one is regret that the freeing of the Jewish people came at the cost of many Egyptians suffering and dying, and out of respect to "not rejoice the downfall of an enemy". However, this is a more modern interpretation originally created by Rabbi Yirmiyahu Löw's grandfather, sometime in the late 18th or early 19th century, though with precedent from Sanhedrin 39b:5. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_note-FOOTNOTEAdelson199526-23] | [TOKENS: 6152] |
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In the four Slavic languages, Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód or Croatian Bliski istok (terms meaning Near East are the only appropriate ones for the region). However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek is Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It comprises the same meaning as the term "Middle East" in North American and Western European usage. The designation, Mashriq, also from the Arabic root for East, also denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Due to it primarily being Arabic speaking, the Maghreb region of North Africa is sometimes included. 
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increase in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. 
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high and very high water-stressed areas compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies.

Economy

Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). For nominal GDP per person, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP PPP. For GDP PPP per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573). The economic structures of Middle Eastern nations differ: while some are heavily dependent on the export of only oil and oil-related products (Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions. Since the end of the COVID pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and among youth it is as high as 28%.

Demographics

Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas.
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance in-flows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best-educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians as well as many Mandaeans have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979.

The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 78% of the population of Cyprus and 40.5% of the population of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects.

The six top languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and in most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages.
Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt where the language is also known as Siwa. It is a non-Semitic Afro-Asiatic sister language.

Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas in neighbouring countries, the country is one of the region's largest and most populous. It belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani and Lurish, amongst many others. The close third-most widely spoken language, Turkish, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas in neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth-most widely spoken language, Kurdish, is spoken in the countries of Iran, Iraq, Syria and Turkey; Sorani Kurdish is the second official language in Iraq (instated after the 2005 constitution) after Arabic.

Hebrew is the official language of Israel, with Arabic given a special status after the 2018 Basic Law lowered its status from an official language prior to 2018. Hebrew is spoken and used by over 80% of Israel's population, the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882.

Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. During antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East, and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior.

English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait. It is also a main language in some emirates of the United Arab Emirates, and it is spoken as a native language by Jewish immigrants from Anglophone countries (the UK, the US, Australia) in Israel and widely understood as a second language there. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Due to widespread immigration of French Jews to Israel, French is also the native language of approximately 200,000 Jews in Israel. Armenian speakers are also found in the region, and Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population, because of emigration in the late 1990s. Russian today is a popular unofficial language in use in Israel; news, radio and sign boards can be found in Russian around the country, after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where, as of 1995, Romanian was spoken by 5% of the population.[d] Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants.

Culture

The Middle East has recently become more prominent in hosting global sport events due to its wealth and desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Clique_(graph_theory)] | [TOKENS: 1363] |
Clique (graph theory)

In graph theory, a clique (/ˈkliːk/ or /ˈklɪk/) is a subset of vertices of an undirected graph such that every two distinct vertices in the clique are adjacent. That is, a clique of a graph G is an induced subgraph of G that is complete. Cliques are one of the basic concepts of graph theory and are used in many other mathematical problems and constructions on graphs. Cliques have also been studied in computer science: the task of finding whether there is a clique of a given size in a graph (the clique problem) is NP-complete, but despite this hardness result, many algorithms for finding cliques have been studied. Although the study of complete subgraphs goes back at least to the graph-theoretic reformulation of Ramsey theory by Erdős & Szekeres (1935), the term clique comes from Luce & Perry (1949), who used complete subgraphs in social networks to model cliques of people; that is, groups of people all of whom know each other. Cliques have many other applications in the sciences and particularly in bioinformatics.

Definitions

A clique, C, in an undirected graph G = (V, E) is a subset of the vertices, C ⊆ V, such that every two distinct vertices are adjacent. This is equivalent to the condition that the subgraph of G induced by C is a complete graph. In some cases, the term clique may also refer to the subgraph directly. A maximal clique is a clique that is not a subset of any larger clique. Some authors define cliques in a way that requires them to be maximal, and use other terminology for complete subgraphs that are not maximal. A maximum clique of a graph G is a clique such that there is no clique with more vertices. Moreover, the clique number ω(G) of a graph G is the number of vertices in a maximum clique in G. The intersection number of G is the smallest number of cliques that together cover all edges of G. The clique cover number of a graph G is the smallest number of cliques of G whose union covers the set of vertices V of the graph. A maximum clique transversal of a graph is a subset of vertices with the property that each maximum clique of the graph contains at least one vertex in the subset. The opposite of a clique is an independent set, in the sense that every clique corresponds to an independent set in the complement graph. The clique cover problem concerns finding as few cliques as possible that include every vertex in the graph. A related concept is a biclique, a complete bipartite subgraph. The bipartite dimension of a graph is the minimum number of bicliques needed to cover all the edges of the graph.

Mathematics

Several important classes of graphs may be defined or characterized by their cliques, and many other mathematical constructions involve cliques in graphs. Closely related concepts to complete subgraphs are subdivisions of complete graphs and complete graph minors. In particular, Kuratowski's theorem and Wagner's theorem characterize planar graphs by forbidden complete and complete bipartite subdivisions and minors, respectively.

Computer science

In computer science, the clique problem is the computational problem of finding a maximum clique, or all cliques, in a given graph. It is NP-complete, one of Karp's 21 NP-complete problems. It is also fixed-parameter intractable, and hard to approximate.
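The definitions above translate directly into a short, brute-force check. The sketch below is illustrative only: the example graph and its representation as a Python adjacency map are invented, and the exhaustive search for the clique number ω(G) takes exponential time, which is exactly the hardness that the clique problem's NP-completeness describes; it is practical only for very small graphs.

```python
from itertools import combinations

# A small undirected graph as an adjacency map (invented example).
# Edges: 1-2, 1-3, 1-4, 2-3, 3-4.
G = {
    1: {2, 3, 4},
    2: {1, 3},
    3: {1, 2, 4},
    4: {1, 3},
}

def is_clique(graph, vertices):
    """True if every two distinct vertices in the subset are adjacent."""
    return all(v in graph[u] for u, v in combinations(vertices, 2))

def clique_number(graph):
    """omega(G): the size of a maximum clique, found by brute force."""
    nodes = list(graph)
    for size in range(len(nodes), 0, -1):
        if any(is_clique(graph, subset) for subset in combinations(nodes, size)):
            return size
    return 0

print(is_clique(G, {1, 2, 3}))   # True: 1-2, 1-3, and 2-3 are all edges
print(is_clique(G, {1, 2, 4}))   # False: 2 and 4 are not adjacent
print(clique_number(G))          # 3, e.g. the clique {1, 2, 3}
```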
Despite these hardness results, many algorithms for computing cliques have been developed, either running in exponential time (such as the Bron–Kerbosch algorithm) or specialized to graph families such as planar graphs or perfect graphs for which the problem can be solved in polynomial time.

Applications

The word "clique", in its graph-theoretic usage, arose from the work of Luce & Perry (1949), who used complete subgraphs to model cliques (groups of people who all know each other) in social networks. The same definition was used by Festinger (1949) in an article using less technical terms. Both works deal with uncovering cliques in a social network using matrices. For continued efforts to model social cliques graph-theoretically, see e.g. Alba (1973), Peay (1974), and Doreian & Woodard (1994).

Many different problems from bioinformatics have been modeled using cliques. For instance, Ben-Dor, Shamir & Yakhini (1999) model the problem of clustering gene expression data as one of finding the minimum number of changes needed to transform a graph describing the data into a graph formed as the disjoint union of cliques; Tanay, Sharan & Shamir (2002) discuss a similar biclustering problem for expression data in which the clusters are required to be cliques. Sugihara (1984) uses cliques to model ecological niches in food webs. Day & Sankoff (1986) describe the problem of inferring evolutionary trees as one of finding maximum cliques in a graph that has as its vertices characteristics of the species, where two vertices share an edge if there exists a perfect phylogeny combining those two characters. Samudrala & Moult (1998) model protein structure prediction as a problem of finding cliques in a graph whose vertices represent positions of subunits of the protein. And by searching for cliques in a protein–protein interaction network, Spirin & Mirny (2003) found clusters of proteins that interact closely with each other and have few interactions with proteins outside the cluster. Power graph analysis is a method for simplifying complex biological networks by finding cliques and related structures in these networks.

In electrical engineering, Prihar (1956) uses cliques to analyze communications networks, and Paull & Unger (1959) use them to design efficient circuits for computing partially specified Boolean functions. Cliques have also been used in automatic test pattern generation: a large clique in an incompatibility graph of possible faults provides a lower bound on the size of a test set. Cong & Smith (1993) describe an application of cliques in finding a hierarchical partition of an electronic circuit into smaller subunits. In chemistry, Rhodes et al. (2003) use cliques to describe chemicals in a chemical database that have a high degree of similarity with a target structure. Kuhl, Crippen & Friesen (1983) use cliques to model the positions in which two chemicals will bind to each other. |
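As a concrete illustration of the Bron–Kerbosch algorithm mentioned above, here is a minimal sketch of its basic recursive form (without the pivoting refinement that practical implementations usually add); the example graph is invented for illustration. It enumerates every maximal clique, so the largest clique it reports also gives the clique number.

```python
def bron_kerbosch(R, P, X, graph, cliques):
    """Basic Bron-Kerbosch recursion (no pivoting): collects every maximal clique.

    R: vertices in the current clique; P: candidates that could extend it;
    X: vertices already processed (used to avoid reporting non-maximal cliques).
    """
    if not P and not X:
        cliques.append(set(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & graph[v], X & graph[v], graph, cliques)
        P = P - {v}
        X = X | {v}

# Example graph as an adjacency map (invented). Edges: 1-2, 1-3, 2-3, 2-4, 3-4, 4-5.
graph = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2, 4},
    4: {2, 3, 5},
    5: {4},
}

cliques = []
bron_kerbosch(set(), set(graph), set(), graph, cliques)
print(cliques)                       # the maximal cliques: {1, 2, 3}, {2, 3, 4}, {4, 5}
print(max(len(c) for c in cliques))  # 3, the clique number of this graph
```

Graph libraries typically ship a tuned version of this enumeration, but the recursion above is the core of it.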
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_comedy] | [TOKENS: 2561] |
Dark humor Dark humor, also known as dark comedy, black comedy, black humor, bleak comedy, gallows humor or morbid humor, is a style of comedy that makes light of subject matter that is generally considered taboo, particularly subjects that are normally considered serious or painful to discuss, aiming to provoke discomfort, serious thought, and amusement in its audience. Dark humor differs from blue comedy—which focuses more on topics such as nudity, sex, and body fluids—and from obscenity. Additionally, whereas the term dark humor is a relatively broad term covering humor relating to many serious subjects, gallows humor tends to be used more specifically in relation to death, or situations that are reminiscent of dying. Dark humor can occasionally be related to the grotesque genre. Literary critics have associated black comedy and black humor with authors as early as the ancient Greeks with Aristophanes. Etymology The term black humor (from the French humour noir) was coined by the Surrealist theorist André Breton in 1935 while interpreting the writings of Jonathan Swift. Breton's preference was to identify some of Swift's writings as a subgenre of comedy and satire in which laughter arises from cynicism and skepticism, often relying on topics such as death. Breton later used the term as the title of his 1940 book Anthology of Black Humor (Anthologie de l'humour noir), in which he credited Jonathan Swift as the originator of black humor and gallows humor (particularly in his pieces Directions to Servants (1731), A Modest Proposal (1729), Meditation Upon a Broomstick (1710), and in a few aphorisms). In his book, Breton included excerpts from 45 other writers, including both examples in which the wit arises from a victim with whom the audience empathizes, as is more typical in the tradition of gallows humor, and examples in which the comedy is used to mock the victim. In the latter cases, the victim's suffering is trivialized, which leads the audience to sympathize with the victimizer, as is analogously found in the social commentary and social criticism of the writings of, for instance, the Marquis de Sade. History The Christian martyr Saint Lawrence became the patron saint of comedians because he made a dark joke during his own execution. He was sentenced to be roasted alive on a gridiron, during which he is said to have quipped, "Turn me over. I'm done on this side." He is also the patron saint of chefs because of this. Among the first American writers who employed black comedy in their works were Nathanael West and Vladimir Nabokov. The concept of black humor first came to nationwide attention after the publication of a 1965 mass-market paperback titled Black Humor, edited by Bruce Jay Friedman. The paperback was one of the first American anthologies devoted to the concept of black humor as a literary genre. With the paperback, Friedman labeled as "black humorists" a variety of authors, such as J. P. Donleavy, Edward Albee, Joseph Heller, Thomas Pynchon, John Barth, Vladimir Nabokov, Bruce Jay Friedman himself, and Louis-Ferdinand Céline. Among the more recent writers suggested as black humorists by journalists and literary critics are Roald Dahl, Kurt Vonnegut, Warren Zevon, Christopher Durang, Philip Roth, and Veikko Huovinen. Evelyn Waugh has been called "the first contemporary writer to produce the sustained black comic novel."
The motive for applying the label black humorist to the writers cited above is that they have written novels, poems, stories, plays, and songs in which profound or horrific events were portrayed in a comic manner. Comedians such as Lenny Bruce, who since the late 1950s had been described by mainstream journalists as practicing "sick comedy", have also been associated with black comedy. Nature and functions Sigmund Freud, in his 1927 essay Humor (Der Humor), although not mentioning 'black humor' specifically, cites a literal instance of gallows humor before writing: "The ego refuses to be distressed by the provocations of reality, to let itself be compelled to suffer. It insists that it cannot be affected by the traumas of the external world; it shows, in fact, that such traumas are no more than occasions for it to gain pleasure." Later sociologists elaborated this concept further. Paul Lewis warns that this "relieving" aspect of gallows jokes depends on the context of the joke: whether the joke is being told by the threatened person themselves or by someone else. Black comedy has the social effect of strengthening the morale of the oppressed and undermining the morale of the oppressors. According to Wylie Sypher, "to be able to laugh at evil and error means we have surmounted them." Black comedy is a natural human instinct, and examples of it can be found in stories from antiquity. Its use was widespread in middle Europe, from where it was imported to the United States. It corresponds to the German expression Galgenhumor, typified by cynical last words uttered before a hanging. The concept of gallows humor is comparable to the French expression rire jaune (lit. yellow laughing), which also has a Germanic equivalent in the Belgian Dutch expression groen lachen (lit. green laughing). Italian comedian Daniele Luttazzi discussed gallows humor, focusing on the particular type of laughter that it arouses (risata verde or groen lachen), and said that grotesque satire, as opposed to ironic satire, is the kind that most often arouses this laughter. In the Weimar-era Kabaretts this genre was particularly common, and according to Luttazzi, Karl Valentin and Karl Kraus were its major masters. Black comedy is common in professions and environments where workers routinely have to deal with dark subject matter, including police officers, firefighters, ambulance crews, military personnel, journalists, lawyers, and funeral directors, for whom it is an acknowledged coping mechanism. Within these professions it is encouraged to take note of the context in which such jokes are told, as outsiders may not react the way that those with shared knowledge do. A 2017 study published in the journal Cognitive Processing concludes that people who appreciate dark humor "may have higher IQs, show lower aggression, and resist negative feelings more effectively than people who turn up their noses at it." References At least, Swift's text is preserved, and so is a prefatory note by the French writer André Breton, which emphasizes Swift's importance as the originator of black humor, of laughter that arises from cynicism and scepticism. When it comes to black humor, everything designates him as the true initiator. In fact, it is impossible to coordinate the fugitive traces of this kind of humor before him, not even in Heraclitus and the Cynics or in the works of Elizabethan dramatic poets. [...] historically justify his being presented as the first black humorist.
Contrary to what Voltaire might have said, Swift was in no sense a "perfected Rabelais." He shared to the smallest possible degree Rabelais's taste for innocent, heavy-handed jokes and his constant drunken good humor. [...] a man who grasped things by reason and never by feeling, and who enclosed himself in skepticism; [...] Swift can rightfully be considered the inventor of "savage" or "gallows" humor. The term was part of the language before Freud wrote an essay on it—'gallows humor.' This is middle European humor, a response to hopeless situations. It's what a man says faced with a perfectly hopeless situation and he still manages to say something funny. Freud gives examples: A man being led out to be hanged at dawn says, 'Well, the day is certainly starting well.' It's generally called Jewish humor in this country. Actually it's humor from the peasants' revolt, the Thirty Years' War, and from the Napoleonic wars. It's small people being pushed this way and that way, enormous armies and plagues and so forth, and still hanging on in the face of hopelessness. Jewish jokes are middle European jokes and the black humorists are gallows humorists, as they try to be funny in the face of situations which they see as just horrible. Terms related to Galgenhumor are: comédie noire (black comedy), plaisanterie macabre (macabre joking), rire jaune. (I offer another: gibêtises.) Dictionary glosses cited for these include humour macabre ("macabre humor"), humeur de désespéré ("the mood of the desperate"), (action de) rire jaune ("(the act of) forced laughter"), and, for Galgenhumor, propos guilleret ("jaunty remark") and etwas freie, gewagte Äußerung ("a somewhat free, risqué utterance"). Walter Redfern, discussing puns about death, remarks: 'Related terms to gallows humour are: black comedy, sick humour, rire jaune. In all, pain and pleasure are mixed, perhaps the definitive recipe for all punning' (Puns, p. 127). In French one says « rire jaune », in Flemish « groen lachen ». In French the terms yellow, green, and blue evoke a number of ideas that differ from those suggested by the corresponding Dutch words geel, groen, blauw. We say rire jaune ("to laugh yellow"), where the Dutch speaker says "to laugh green" (groen lachen); what the Dutch call a "green one" (een groentje) is what French designates with the word bleu (an inexperienced young soldier)... Comparisons of this kind show how one might conceive a study of the psychology of peoples based on the associations of ideas revealed by shifts in meaning (semantics), figurative expressions, proverbs, and sayings. Q: Fierce criticism, parliamentary questions: hard times for satire. A: Satire means making people laugh at the expense of those who are richer and more powerful than you. I am a specialist in the risata verde, the laughter of the Berlin cabarets of the 1920s and 1930s. It is born of desperation. Example: Italy is a country where the parliamentary oversight committee for RAI behaves like the commission on the massacres, and vice versa. Or: the Ustica mystery remains unsolved? I'm glad: the system works. A piece of grotesque satire [...] The aim of the grotesque is to make the horror of an event felt. It is not the satire we are used to in Italy: it is found in the cabaret of the 1920s and '30s, and was later erased by the burden of wartime suffering. I should add that I had explained at the start of the evening that there would be very different moments of satire. Ironic satire, which makes you laugh, and grotesque satire, which can hurt, because it leads to the laughter of desperation, of impotence. The risata verde. It was strong, because it brought together in a single stroke all the pivots of satire: politics, religion, sex, and death.
What I did was to heighten the interaction between those elements. Was it in bad taste? Rabelais and Swift, who explored these dark sides of our personality, never worried about good taste. When satire manages to raise a laugh about a subject so dramatic that one laughs only because there is no other possible response, you get what in the Berlin cabarets of the 1920s was called the "risata verde". It is useful to distinguish an ironic satire, which works by subtraction, from a grotesque satire, which works by addition. This second kind of satire more often produces the risata verde. Kraus and Valentin were its masters. |
======================================== |
[SOURCE: https://github.com/why-github] | [TOKENS: 727] |
Most of the world's code lives on GitHub. Why not yours? GitHub empowers developers and enterprises to collaborate, innovate, and build securely. With AI-powered tools, built-in security testing, and seamless integration, it supports teams from first commit to enterprise development. Over 90% of Fortune 100 companies and more than 150 million developers rely on GitHub to deliver scalable, reliable, and secure solutions for teams of all sizes. Developer-first: Designed for developers, GitHub offers seamless collaboration tools that make teamwork smarter, faster, and more secure. Enterprise-grade: GitHub Enterprise scales with your organization, delivering the performance and security needed for teams of any size. AI-powered: Leverage GitHub Copilot to automate tasks and enhance productivity with smart, context-aware code suggestions. Customer logos: Fidelity, Shopify, Mercedes-Benz, American Airlines, Adobe, Ford, Vodafone, Spotify, Home Depot. The developer platform that grows with you Whether you're a small startup or a global enterprise, GitHub is designed to grow with you. The platform adapts to your needs, helping ensure that you don't have to compromise on performance, security, or collaboration as your organization scales. Tailor your workflows with GitHub Actions and integrate seamlessly with your existing tools. GitHub's centralized access management and compliance tools help ensure your code and data remain safe. With GitHub Enterprise Cloud, you decide where your code lives while enabling security, compliance, and scalability with SaaS agility and enterprise-grade governance. 55% faster coding enabled by GitHub Copilot; 80% time saved in developer onboarding; $3.2M in savings by reducing developer onboarding training time through automation; 75% improvement in time spent managing tools and code infrastructure. Security throughout the SDLC Fix vulnerabilities before they hit production and reduce the risk of a costly breach with application security that is built in, not bolted on. Review potential vulnerabilities and get suggested fixes with Copilot Autofix to accelerate remediation and strengthen security posture. Help ensure your secrets stay secure by preventing accidental exposure in your repositories. Visualize, protect, and remediate your code's upstream dependencies. 3x faster remediation on average with Copilot Autofix; 28 min from vulnerability detection to successful remediation; 4.4M secrets prevented from being leaked on GitHub in 2024. The comprehensive platform for high-performance teams GitHub is where the world builds software—faster, smarter, and more securely. Unlock the full potential of your team with an AI-native platform, seamless automation, and CI/CD workflows that help you build, scale, and innovate like never before. Harness GitHub Copilot to automate tasks, enhance code quality, and boost productivity. With intelligent, adaptive recommendations, you'll write cleaner code more quickly and accomplish more in less time. With GitHub's integrated tools—from pull requests to project boards—collaboration is streamlined, and automation handles the heavy lifting. Keep your team aligned, reduce manual tasks, and stay focused on building great software. GitHub introduces new ways to work smarter and faster.
With AI-powered tools and agentic automation, you can reduce repetitive tasks and stay in a flow state—shaping the future of software with speed and intention. Empower your team to collaborate, innovate, and build software—faster, smarter, and more securely—with the platform they know and love. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Esterel] | [TOKENS: 643] |
Esterel Esterel is a synchronous programming language for the development of complex reactive systems. The imperative programming style of Esterel allows the simple expression of parallelism and preemption. As a consequence, it is well suited for control-dominated model designs. The development of the language started in the early 1980s and was mainly carried out by a team from the Ecole des Mines de Paris and INRIA led by Gérard Berry in France. Current compilers take Esterel programs and generate C code or hardware (RTL) implementations (VHDL or Verilog). The language is still under development, with several compilers available. The commercial development environment for Esterel is Esterel Studio. The company that commercialized it (Synfora) initiated a standardization process with the IEEE in April 2007; however, the working group (P1778) was dissolved in March 2011. The reference manual is publicly available. A provisional version of Esterel has been implemented in Racket. The multiform notion of time The notion of time used in Esterel differs from that of non-synchronous languages in the following way: the notion of physical time is replaced with the notion of order. Only the simultaneity and precedence of events are considered, which means that physical time plays no special role. This is called the multiform notion of time. An Esterel program describes a totally ordered sequence of logical instants. At each instant, an arbitrary number of events occur (including zero). Event occurrences that happen at the same logical instant are considered simultaneous; all other events are ordered according to the instants in which they occur. There are two types of statements: those that take zero time (execute and terminate in the same instant) and those that delay for a prescribed number of cycles. Signals Signals are the only means of communication. There are valued and non-valued signals, further categorized as input, output, or local signals. A signal has the property of being either present or absent in an instant; valued signals also carry a value. Signals are broadcast across the program, which means that any process can read or write a signal. The value of a valued signal can be determined in any instant, even if the signal is absent. The default status of a signal is absent; signals remain absent until they are explicitly set to present using the emit statement. Communication is instantaneous, meaning that a signal emitted in a cycle is visible immediately within that same cycle, so one can communicate back and forth in a single cycle. A program that emits a signal A only after first testing for the presence of A is therefore erroneous, since the writer "emit A" should run before the reader "present A", whereas such a program requires "present A" to be performed first. The language statements Pure Esterel has eleven primitive statements, and several derived constructions are built from them. The full Esterel language also has statements for declaring and instantiating modules, for variables, for calling external procedures, and for valued signals. A typical example program emits the output O as soon as both inputs A and B have been received, and resets this behaviour whenever the input R is received. |
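The behaviour just described, widely known as the ABRO example, can be sketched outside Esterel as an ordinary state machine fed one set of present signals per logical instant. The Python sketch below is only an approximation of that input/output behaviour for illustration: it is not Esterel code, it ignores instantaneous broadcast and causality analysis, and letting R take priority over A or B occurring in the same instant is an assumption made here.

def abro():
    # Hand-written state machine mimicking the described behaviour, one logical instant per step.
    seen_a = seen_b = done = False
    while True:
        present = yield                      # the set of input signals present in this instant
        if "R" in present:                   # assumption: R takes priority over simultaneous A/B
            seen_a = seen_b = done = False
            continue
        seen_a = seen_a or "A" in present
        seen_b = seen_b or "B" in present
        if seen_a and seen_b and not done:
            done = True                      # O is emitted once; nothing more until the next R
            print("instant", sorted(present), "-> emit O")

machine = abro()
next(machine)                                # prime the generator
for instant in [{"A"}, {"B"}, {"A", "B"}, {"R"}, {"A", "B"}]:
    machine.send(instant)
# O is emitted at the second instant (A then B have been seen) and again at the fifth, after the reset on R.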
======================================== |