Noam Chomsky

Avram Noam Chomsky[a] (born December 7, 1928) is an American professor and public intellectual known for his work in linguistics, political activism, and social criticism. Sometimes called "the father of modern linguistics",[b] Chomsky is also a major figure in analytic philosophy and one of the founders of the field of cognitive science. He is a laureate professor of linguistics at the University of Arizona and an institute professor emeritus at the Massachusetts Institute of Technology (MIT). Among the most cited living authors, Chomsky has written more than 150 books on topics such as linguistics, war, and politics. In addition to his work in linguistics, since the 1960s, Chomsky has been an influential voice on the American left as a consistent critic of U.S. foreign policy, contemporary capitalism, and corporate influence on political institutions and the media.

Born to Ashkenazi Jewish immigrants in Philadelphia, Chomsky developed an early interest in anarchism from alternative bookstores in New York City. He studied at the University of Pennsylvania. During his postgraduate work in the Harvard Society of Fellows, Chomsky developed the theory of transformational grammar for which he earned his doctorate in 1955. That year he began teaching at MIT, and in 1957 emerged as a significant figure in linguistics with his landmark work Syntactic Structures, which played a major role in remodeling the study of language. From 1958 to 1959 Chomsky was a National Science Foundation fellow at the Institute for Advanced Study. He created or co-created the universal grammar theory, the generative grammar theory, the Chomsky hierarchy, and the minimalist program. Chomsky also played a pivotal role in the decline of linguistic behaviorism, and was particularly critical of the work of B. F. Skinner.

An outspoken opponent of U.S. involvement in the Vietnam War, which he saw as an act of American imperialism, in 1967 Chomsky rose to national attention for his anti-war essay "The Responsibility of Intellectuals". Becoming associated with the New Left, he was arrested multiple times for his activism and placed on President Richard Nixon's list of political opponents. While expanding his work in linguistics over subsequent decades, he also became involved in the linguistics wars. In collaboration with Edward S. Herman, Chomsky later articulated the propaganda model of media criticism in Manufacturing Consent, and worked to expose the Indonesian occupation of East Timor. His defense of unconditional freedom of speech, including that of Holocaust denial, generated significant controversy in the Faurisson affair of the 1980s. Chomsky's commentary on the Cambodian genocide and the Bosnian genocide also generated controversy. Since retiring from active teaching at MIT, he has continued his vocal political activism, including opposing the 2003 invasion of Iraq and supporting the Occupy movement. An anti-Zionist, Chomsky considers Israel's treatment of Palestinians to be worse than South African–style apartheid, and criticizes U.S. support for Israel.

Chomsky is widely recognized as having helped to spark the cognitive revolution in the human sciences, contributing to the development of a new cognitivistic framework for the study of language and the mind. Chomsky remains a leading critic of U.S. foreign policy, contemporary capitalism, U.S. involvement and Israel's role in the Israeli–Palestinian conflict, and mass media.
Chomsky and his ideas remain highly influential in the anti-capitalist and anti-imperialist movements. Life Chomsky was born on December 7, 1928, in the East Oak Lane neighborhood of Philadelphia, Pennsylvania. His parents, William Chomsky and Elsie Simonofsky, were Ashkenazi Jewish immigrants. In 1913, William fled the Russian Empire, from what is now Ukraine, to escape conscription, and worked in Baltimore sweatshops and Hebrew elementary schools before attending college. Elsie immigrated from the region of what is present-day Belarus. Both parents' first language was Yiddish although it was taboo to speak it at home; his father spoke English with a foreign accent while his mother spoke a native New York City English dialect. After moving to Philadelphia, William became principal of the Congregation Mikveh Israel religious school and joined the Gratz College faculty. He placed great emphasis on educating people so that they would be "well integrated, free and independent in their thinking, concerned about improving and enhancing the world, and eager to participate in making life more meaningful and worthwhile for all", a mission that shaped and was subsequently adopted by his son. Elsie, who also taught at Mikveh Israel, shared her leftist politics and care for social issues with her sons. Noam's only sibling, David Eli Chomsky, was born five years later, and worked as a cardiologist in Philadelphia. The brothers were close, though David was more easygoing while Noam could be very competitive. They were raised Jewish, being taught Hebrew and regularly involved with discussing the political theories of Zionism; the family was particularly influenced by the Left Zionist writings of Ahad Ha'am. He faced antisemitism as a child, particularly from Philadelphia's Irish and German communities. Chomsky attended the independent, Deweyite Oak Lane Country Day School and Philadelphia's Central High School, where he excelled academically and joined various clubs and societies, but was troubled by the school's hierarchical and domineering teaching methods. He also attended Hebrew High School at Gratz College, where his father taught. Chomsky has described his parents as "normal Roosevelt Democrats" with center-left politics, but relatives involved in the International Ladies' Garment Workers' Union exposed him to socialism and far-left politics. He was substantially influenced by his uncle and the Jewish leftists who frequented his New York City newspaper stand to debate current affairs. Chomsky himself often visited left-wing and anarchist bookstores when visiting his uncle in the city, voraciously reading political literature. He became absorbed in the story of the 1939 fall of Barcelona and suppression of the Spanish anarchosyndicalist movement, writing his first article on the topic at the age of 10. That he came to identify with anarchism first rather than another leftist movement, he described as a "lucky accident". Chomsky was firmly anti-Bolshevik by his early teens. In 1945, at the age of 16, Chomsky began a general program of study at the University of Pennsylvania, where he explored philosophy, logic, and languages and developed a primary interest in learning Arabic. Living at home, he funded his undergraduate degree by teaching Hebrew. 
Frustrated with his experiences at the university, he considered dropping out and moving to a kibbutz in Mandatory Palestine, but his intellectual curiosity was reawakened through conversations with the linguist Zellig Harris, whom he first met in a political circle in 1947. Harris introduced Chomsky to the field of theoretical linguistics and convinced him to major in the subject. Chomsky's BA honors thesis, "Morphophonemics of Modern Hebrew", applied Harris's methods to the language. Chomsky revised this thesis for his MA, which he received from the University of Pennsylvania in 1951; it was subsequently published as a book. He also developed his interest in philosophy while at university, in particular under the tutelage of Nelson Goodman. From 1951 to 1955, Chomsky was a member of the Society of Fellows at Harvard University, where he undertook research on what became his doctoral dissertation. Having been encouraged by Goodman to apply, Chomsky was attracted to Harvard in part because the philosopher Willard Van Orman Quine was based there. Both Quine and a visiting philosopher, J. L. Austin of the University of Oxford, strongly influenced Chomsky. In 1952, Chomsky published his first academic article in The Journal of Symbolic Logic. Highly critical of the established behaviorist currents in linguistics, in 1954, he presented his ideas at lectures at the University of Chicago and Yale University. He had not been registered as a student at Pennsylvania for four years, but in 1955 he submitted a thesis setting out his ideas on transformational grammar; he was awarded a Doctor of Philosophy degree for it, and it was privately distributed among specialists on microfilm before being published in 1975 as part of The Logical Structure of Linguistic Theory. Harvard professor George Armitage Miller was impressed by Chomsky's thesis and collaborated with him on several technical papers in mathematical linguistics. Chomsky's doctorate exempted him from compulsory military service, which was otherwise due to begin in 1955. In 1947, Chomsky began a romantic relationship with Carol Doris Schatz, whom he had known since early childhood. They married in 1949. After Chomsky was made a Fellow at Harvard, the couple moved to the Allston area of Boston and remained there until 1965, when they relocated to the suburb of Lexington. The couple took a Harvard travel grant to Europe in 1953. He enjoyed living in Hashomer Hatzair's HaZore'a kibbutz while in Israel, but was appalled by his interactions with Jewish nationalism, anti-Arab racism and, within the kibbutz's leftist community, Stalinism. On visits to New York City, Chomsky continued to frequent the office of the Yiddish anarchist journal Fraye Arbeter Shtime and became enamored with the ideas of Rudolf Rocker, a contributor whose work introduced Chomsky to the link between anarchism and classical liberalism. Chomsky also read other political thinkers: the anarchists Mikhail Bakunin and Diego Abad de Santillán, democratic socialists George Orwell, Bertrand Russell, and Dwight Macdonald, and works by Marxists Karl Liebknecht, Karl Korsch, and Rosa Luxemburg. His politics were reaffirmed by Orwell's depiction of Barcelona's functioning anarchist society in Homage to Catalonia (1938). Chomsky read the leftist journal Politics, which furthered his interest in anarchism, and the council communist periodical Living Marxism, though he rejected the Marxist orthodoxy of its editor, Paul Mattick. 
Chomsky befriended two linguists at the Massachusetts Institute of Technology (MIT)—Morris Halle and Roman Jakobson—the latter of whom secured him an assistant professor position there in 1955. At MIT, Chomsky spent half his time on a mechanical translation project and half teaching a course on linguistics and philosophy. He described MIT as open to experimentation, a place where he was free to pursue his idiosyncratic interests. MIT promoted him to the position of associate professor in 1957, and over the next year he was also a visiting professor at Columbia University. The Chomskys had their first child, Aviva, that same year. He also published his first book on linguistics, Syntactic Structures, a work that radically opposed the dominant Harris–Bloomfield trend in the field. Responses to Chomsky's ideas ranged from indifference to hostility, and his work proved divisive and caused "significant upheaval" in the discipline. The linguist John Lyons later asserted that Syntactic Structures "revolutionized the scientific study of language". From 1958 to 1959 Chomsky was a National Science Foundation fellow at the Institute for Advanced Study in Princeton, New Jersey. Chomsky's provocative critique of B. F. Skinner, who viewed language as entirely learned behavior, and that critique's challenge to the dominant behaviorist paradigm thrust Chomsky into the limelight. Chomsky argued that behaviorism underplayed the role of human creativity in learning language and overplayed the role of external conditions in influencing verbal behavior. He proceeded to found MIT's graduate program in linguistics with Halle.

In 1961, Chomsky received tenure and became a full professor in the Department of Modern Languages and Linguistics. He was appointed plenary speaker at the Ninth International Congress of Linguists, held in 1962 in Cambridge, Massachusetts, which established him as the de facto spokesperson of American linguistics. Between 1963 and 1965 he consulted on a military-sponsored project to teach computers to understand natural English commands from military generals. Chomsky continued to publish his linguistic ideas throughout the decade, including in Aspects of the Theory of Syntax (1965), Topics in the Theory of Generative Grammar (1966), and Cartesian Linguistics: A Chapter in the History of Rationalist Thought (1966). Along with Halle, he also edited the Studies in Language series of books for Harper and Row. As he began to accrue significant academic recognition and honors for his work, Chomsky lectured at the University of California, Berkeley, in 1966. These lectures were published as Language and Mind in 1968.

In the late 1960s, a high-profile intellectual rift later known as the linguistics wars developed between Chomsky and some of his colleagues and doctoral students—including Paul Postal, John Ross, George Lakoff, and James D. McCawley—who contended that Chomsky's syntax-based, interpretivist linguistics did not properly account for semantic context (an approach that became known as generative semantics). A post hoc assessment of this period concluded that the opposing programs ultimately were complementary, each informing the other.

[I]t does not require very far-reaching, specialized knowledge to perceive that the United States was invading South Vietnam. And, in fact, to take apart the system of illusions and deception which functions to prevent understanding of contemporary reality [is] not a task that requires extraordinary skill or understanding.
It requires the kind of normal skepticism and willingness to apply one's analytical skills that almost all people have and that they can exercise. Chomsky joined protests against U.S. involvement in the Vietnam War in 1962, speaking on the subject at small gatherings in churches and homes. His 1967 critique of U.S. involvement, "The Responsibility of Intellectuals", among other contributions to The New York Review of Books, debuted Chomsky as a public dissident. This essay and other political articles were collected and published in 1969 as part of Chomsky's first political book, American Power and the New Mandarins. He followed this with further political books, including At War with Asia (1970), The Backroom Boys (1973), For Reasons of State (1973), and Peace in the Middle East? (1974), published by Pantheon Books. These publications led to Chomsky's association with the American New Left movement, though he thought little of prominent New Left intellectuals Herbert Marcuse and Erich Fromm and preferred the company of activists to that of intellectuals. Chomsky remained largely ignored by the mainstream press throughout this period. Chomsky also became involved in left-wing activism. Chomsky refused to pay half his taxes, publicly supported students who refused the draft, and was arrested while participating in an anti-war teach-in outside the Pentagon. During this time, Chomsky co-founded the anti-war collective RESIST with Hans Koning, Mitchell Goodman, Denise Levertov, William Sloane Coffin, and Dwight Macdonald. Although he questioned the objectives of the 1968 student protests, Chomsky regularly gave lectures to student activist groups and, with his colleague Louis Kampf, ran undergraduate courses on politics at MIT independently of the conservative-dominated political science department. When student activists campaigned to stop weapons and counterinsurgency research at MIT, Chomsky was sympathetic but felt that the research should remain under MIT's oversight and limited to systems of deterrence and defense. Chomsky has acknowledged that his MIT lab's funding at this time came from the military. He later said he considered resigning from MIT during the Vietnam War. There has since been a wide-ranging debate about what effects Chomsky's employment at MIT had on his political and linguistic ideas. Chomsky's anti-war activism led to his arrest on multiple occasions and he was on President Richard Nixon's master list of political opponents. Chomsky was aware of the potential repercussions of his civil disobedience, and his wife began studying for her own doctorate in linguistics to support the family in the event of Chomsky's imprisonment or joblessness. Chomsky's scientific reputation insulated him from administrative action based on his beliefs. In 1970 he visited southeast Asia to lecture at Vietnam's Hanoi University of Science and Technology and toured war refugee camps in Laos. In 1973 he helped lead a committee commemorating the 50th anniversary of the War Resisters League. Chomsky's work in linguistics continued to gain international recognition as he received multiple honorary doctorates. He delivered public lectures at the University of Cambridge, Columbia University (Woodbridge Lectures), and Stanford University. His appearance in a 1971 debate with French continental philosopher Michel Foucault positioned Chomsky as a symbolic figurehead of analytic philosophy. 
He continued to publish extensively on linguistics, producing Studies on Semantics in Generative Grammar (1972), an enlarged edition of Language and Mind (1972), and Reflections on Language (1975). In 1974 Chomsky became a corresponding fellow of the British Academy. In the late 1970s and 1980s, Chomsky's linguistic publications expanded and clarified his earlier work, addressing his critics and updating his grammatical theory. His political talks often generated considerable controversy, particularly when he criticized the Israeli government and military. In the early 1970s Chomsky began collaborating with Edward S. Herman, who had also published critiques of the U.S. war in Vietnam. Together they wrote Counter-Revolutionary Violence: Bloodbaths in Fact & Propaganda, a book that criticized U.S. military involvement in Southeast Asia and the mainstream media's failure to cover it. Warner Modular published it in 1973, but its parent company disapproved of the book's contents and ordered all copies destroyed. While mainstream publishing options proved elusive, Chomsky found support from Michael Albert's South End Press, an activist-oriented publishing company. In 1979, South End published Chomsky and Herman's revised Counter-Revolutionary Violence as the two-volume The Political Economy of Human Rights, which compares U.S. media reactions to the Cambodian genocide and the Indonesian occupation of East Timor. It argues that because Indonesia was a U.S. ally, U.S. media ignored the East Timorese situation while focusing on events in Cambodia, a U.S. enemy. Chomsky's response included two testimonials before the United Nations' Special Committee on Decolonization, successful encouragement for American media to cover the occupation, and meetings with refugees in Lisbon. Marxist academic Steven Lukes most prominently publicly accused Chomsky of betraying his anarchist ideals and acting as an apologist for Cambodian leader Pol Pot. Herman said that the controversy "imposed a serious personal cost" on Chomsky, who considered the personal criticism less important than the evidence that "mainstream intelligentsia suppressed or justified the crimes of their own states". Chomsky had long publicly criticized Nazism, and totalitarianism more generally, but his commitment to freedom of speech led him to defend the right of French historian Robert Faurisson to advocate a position widely characterized as Holocaust denial. Without Chomsky's knowledge, his plea for Faurisson's freedom of speech was published as the preface to the latter's 1980 book Mémoire en défense contre ceux qui m'accusent de falsifier l'histoire. Chomsky was widely condemned for defending Faurisson, and France's mainstream press accused Chomsky of being a Holocaust denier himself, refusing to publish his rebuttals to their accusations. Critiquing Chomsky's position, sociologist Werner Cohn later published an analysis of the affair titled Partners in Hate: Noam Chomsky and the Holocaust Deniers. The Faurisson affair had a lasting, damaging effect on Chomsky's career, especially in France. In 1985, during the Nicaraguan Contra War—in which the U.S. supported the contra militia against the Sandinista government—Chomsky traveled to Managua to meet with workers' organizations and refugees of the conflict, giving public lectures on politics and linguistics. Many of these lectures were published in 1987 as On Power and Ideology: The Managua Lectures. In 1983 he published The Fateful Triangle, which argued that the U.S. 
had continually used the Israeli–Palestinian conflict for its own ends. In 1988, Chomsky visited the Palestinian territories to witness the impact of Israeli occupation. Chomsky and Herman's Manufacturing Consent: The Political Economy of the Mass Media (1988) outlines their propaganda model for understanding mainstream media. Even in countries without official censorship, they argued, the news is censored through five filters that greatly influence both what and how news is presented. The book received a 1992 film adaptation. In 1989, Chomsky published Necessary Illusions: Thought Control in Democratic Societies, in which he suggests that a worthwhile democracy requires that its citizens undertake intellectual self-defense against the media and elite intellectual culture that seeks to control them. By the 1980s, Chomsky's students had become prominent linguists who, in turn, expanded and revised his linguistic theories. In the 1990s, Chomsky embraced political activism to a greater degree than before. Retaining his commitment to the cause of East Timorese independence, in 1995 he visited Australia to talk on the issue at the behest of the East Timorese Relief Association and the National Council for East Timorese Resistance. The lectures he gave on the subject were published as Powers and Prospects in 1996. As a result of the international publicity Chomsky generated, his biographer Wolfgang Sperlich opined that he did more to aid the cause of East Timorese independence than anyone but the investigative journalist John Pilger. After East Timor attained independence from Indonesia in 1999, the Australian-led International Force for East Timor arrived as a peacekeeping force; Chomsky was critical of this, believing it was designed to secure Australian access to East Timor's oil and gas reserves under the Timor Gap Treaty. Chomsky was widely interviewed after the September 11 attacks in 2001 as the American public attempted to make sense of the attacks. He argued that the ensuing war on terror was not a new development but a continuation of U.S. foreign policy and concomitant rhetoric since at least the Reagan era. He gave the D.T. Lakdawala Memorial Lecture in New Delhi in 2001, and in 2003 visited Cuba at the invitation of the Latin American Association of Social Scientists. Chomsky's 2003 Hegemony or Survival articulated what he called the United States' "imperial grand strategy" and critiqued the Iraq War and other aspects of the war on terror. Chomsky toured internationally with greater regularity during this period. Chomsky retired from MIT in 2002, but continued to conduct research and seminars on campus as an emeritus. That same year he visited Turkey to attend the trial of a publisher who had been accused of treason for printing one of Chomsky's books; Chomsky insisted on being a co-defendant and amid international media attention, the Security Courts dropped the charge on the first day. During that trip Chomsky visited Kurdish areas of Turkey and spoke out in favor of the Kurds' human rights. A supporter of the World Social Forum, he attended its conferences in Brazil in both 2002 and 2003, also attending the Forum event in India. Chomsky supported the 2011 Occupy movement, speaking at encampments and publishing on the movement, which he called a reaction to a 30-year class war. The 2015 documentary Requiem for the American Dream summarizes his views on capitalism and economic inequality through a "75-minute teach-in". 
In 2015, Chomsky and his wife purchased a residence in São Paulo, Brazil, and began splitting their time between Brazil and the U.S. Chomsky taught a short-term politics course at the University of Arizona in 2017. He was later hired as the Agnese Nelms Haury Chair in the Agnese Nelms Haury Program in Environment and Social Justice, a part-time professorship in the linguistics department with duties including teaching and public seminars. His salary was covered by philanthropic donations. After a stroke in June 2023, Chomsky moved to Brazil full-time.

Linguistic theory

What started as purely linguistic research ... has led, through involvement in political causes and an identification with an older philosophic tradition, to no less than an attempt to formulate an overall theory of man. The roots of this are manifest in the linguistic theory ... The discovery of cognitive structures common to the human race but only to humans (species specific), leads quite easily to thinking of unalienable human attributes.

The basis of Chomsky's linguistic theory lies in biolinguistics, the linguistic school that holds that the principles underpinning the structure of language are biologically preset in the human mind and hence genetically inherited. He argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences. In adopting this position Chomsky rejects the radical behaviorist psychology of B. F. Skinner, who viewed speech, thought, and all behavior as a completely learned product of the interactions between organisms and their environments. Accordingly, Chomsky argues that language is a unique evolutionary development of the human species and distinguished from modes of communication used by any other animal species. Chomsky argues that his nativist, internalist view of language is consistent with the philosophical school of "rationalism" and contrasts with the anti-nativist, externalist view of language consistent with the philosophical school of "empiricism", which contends that all knowledge, including language, comes from external stimuli. Historians have disputed Chomsky's claim about rationalism on the basis that his theory of innate grammar excludes propositional knowledge and instead focuses on innate learning capacities or structures.

Since the 1960s, Chomsky has maintained that syntactic knowledge is partially inborn, implying that children need only learn certain language-specific features of their native languages. He bases his argument on observations about human language acquisition and describes a "poverty of the stimulus": an enormous gap between the linguistic stimuli to which children are exposed and the rich linguistic competence they attain. For example, although children are exposed to only a very small and finite subset of the allowable syntactic variants within their first language, they somehow acquire the highly organized and systematic ability to understand and produce an infinite number of sentences, including ones that have never before been uttered, in that language. To explain this, Chomsky proposed that the primary linguistic data must be supplemented by an innate linguistic capacity. Furthermore, while a human baby and a kitten are both capable of inductive reasoning, if they are exposed to exactly the same linguistic data, the human will always acquire the ability to understand and produce language, while the kitten will never acquire either ability.
Chomsky referred to this difference in capacity as the language acquisition device, and suggested that linguists needed to determine both what that device is and what constraints it imposes on the range of possible human languages. The universal features that result from these constraints would constitute "universal grammar". Multiple researchers have challenged universal grammar on the grounds of the evolutionary infeasibility of its genetic basis for language, the lack of crosslinguistic surface universals, and the unproven link between innate/universal structures and the structures of specific languages. Michael Tomasello has challenged Chomsky's theory of innate syntactic knowledge as being based on theory rather than behavioral observation. The empirical basis of poverty of the stimulus arguments has been challenged by Geoffrey Pullum and others, leading to back-and-forth debate in the language acquisition literature. Recent work has also suggested that some recurrent neural network architectures can learn hierarchical structure without an explicit constraint.

Chomsky is generally credited with launching the research tradition of generative grammar, which aims to explain the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative grammar proposes models of language consisting of explicit rule systems, which make testable, falsifiable predictions. The goal of generative grammar is sometimes described as answering the question "What is it that you know when you know a language?"

Within generative grammar, Chomsky's initial model was called transformational grammar. Chomsky developed transformational grammar in the mid-1950s, and it became the dominant syntactic theory in linguistics for two decades. "Transformations" are syntactic rules that derive surface structure from deep structure, which was often considered to reflect the structure of meaning. Transformational grammar later developed into the government and binding theory of the 1980s and thence into the minimalist program. This research focused on the principles and parameters framework, which explained children's ability to learn any language by setting open parameters, options left unspecified by universal grammar's fixed principles, as the child encounters linguistic data. The minimalist program, initiated by Chomsky, asks which minimal version of principles and parameters theory fits most elegantly, naturally, and simply.

Chomsky is commonly credited with inventing transformational-generative grammar, but his original contribution was considered modest when he first published his theory. In his 1955 dissertation and his 1957 book Syntactic Structures, he presented recent developments in the analysis formulated by Zellig Harris, who was Chomsky's PhD supervisor, and by Charles F. Hockett.[c] Their method derives from the work of the structural linguist Louis Hjelmslev, who introduced algorithmic grammar to general linguistics.[d] Based on this rule-based notation of grammars, Chomsky grouped logically possible phrase-structure grammar types into a series of four nested, increasingly complex types, together known as the Chomsky hierarchy. This classification remains relevant to formal language theory and theoretical computer science, especially programming language theory, compiler construction, and automata theory.
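As a rough illustration, not drawn from Chomsky's own writings, the sketch below contrasts two adjacent levels of this hierarchy in Python: a right-linear (Type-3, regular) grammar, whose strings a finite automaton could accept, and a context-free (Type-2) grammar generating the classic aⁿbⁿ language, which no regular grammar can describe. The grammar tables and function names are hypothetical and chosen only for the example.

```python
# Illustrative sketch only: toy grammars showing two levels of the Chomsky hierarchy.
# The names REGULAR, CONTEXT_FREE, generate_regular and generate_cfg are hypothetical.
import random

# Type-3 (regular) grammar: every rule has the right-linear form  A -> t B  or  A -> t.
# This one generates the language a*b (any number of a's followed by a single b).
REGULAR = {
    "S": [("a", "S"), ("b", None)],  # S -> aS | b
}

# Type-2 (context-free) grammar: a single nonterminal rewrites to any string of symbols.
# This one generates a^n b^n, which no regular grammar (or finite automaton) can capture.
CONTEXT_FREE = {
    "S": [["a", "S", "b"], []],  # S -> aSb | empty string
}

def generate_regular(grammar, start="S"):
    """Expand a right-linear grammar left to right, emitting one terminal per step."""
    out, symbol = [], start
    while symbol is not None:
        terminal, symbol = random.choice(grammar[symbol])
        out.append(terminal)
    return "".join(out)

def generate_cfg(grammar, symbol="S"):
    """Expand a context-free grammar top-down; the recursion plays the role of the
    stack a pushdown automaton would need to recognize a^n b^n."""
    production = random.choice(grammar[symbol])
    return "".join(
        generate_cfg(grammar, item) if item in grammar else item
        for item in production
    )

if __name__ == "__main__":
    print([generate_regular(REGULAR) for _ in range(3)])   # e.g. ['b', 'aab', 'aaaab']
    print([generate_cfg(CONTEXT_FREE) for _ in range(3)])  # e.g. ['ab', '', 'aaabbb']
```

Sampling a few strings from each table shows the nesting in practice: every regular grammar is also context-free, but matching the count of a's and b's requires the extra expressive power of the context-free level, which is the kind of containment relation the hierarchy formalizes.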
Beyond generative linguistics itself, Chomsky's Syntactic Structures became a catalyst connecting the beginnings of structural linguistics in Hjelmslev's and Jespersen's time with what has since become cognitive linguistics.

Political views

The second major area to which Chomsky has contributed—and surely the best known in terms of the number of people in his audience and the ease of understanding what he writes and says—is his work on sociopolitical analysis; political, social, and economic history; and critical assessment of current political circumstance. In Chomsky's view, although those in power might—and do—try to obscure their intentions and to defend their actions in ways that make them acceptable to citizens, it is easy for anyone who is willing to be critical and consider the facts to discern what they are up to.

Chomsky is a prominent political dissident.[e] His political views have changed little since his childhood, when he was influenced by the emphasis on political activism that was ingrained in Jewish working-class tradition. He usually identifies as an anarcho-syndicalist or a libertarian socialist. He views these positions not as precise political theories but as ideals that he thinks best meet human needs: liberty, community, and freedom of association. Unlike some other socialists, such as Marxists, Chomsky believes that politics lies outside the remit of science, but he still roots his ideas about an ideal society in empirical data and empirically justified theories.

In Chomsky's view, the truth about political realities is systematically distorted or suppressed by an elite corporatocracy, which uses corporate media, advertising, and think tanks to promote its own propaganda. His work seeks to reveal such manipulations and the truth they obscure. Chomsky believes this web of falsehood can be broken by "common sense", critical thinking, and understanding the roles of self-interest and self-deception, and that intellectuals abdicate their moral responsibility to tell the truth about the world for fear of losing prestige and funding. He argues that, as such an intellectual, it is his duty to use his social privilege, resources, and training to aid popular democracy movements in their struggles. Although he has participated in direct action demonstrations—joining protests, being arrested, organizing groups—Chomsky's primary political outlet is education, i.e., free public lessons. Chomsky is a longtime member of the Democratic Socialists of America (DSA) and of the Industrial Workers of the World (IWW) international union, as was his father.

Chomsky has been a prominent critic of American imperialism, but is not a pacifist, believing World War II was justified as America's last defensive war. He believes that U.S. foreign policy's basic principle is the establishment of "open societies" that are economically and politically controlled by the U.S. and where U.S.-based businesses can prosper. He argues that the U.S. seeks to suppress any movements within these countries that are not compliant with U.S. interests and to ensure that U.S.-friendly governments are placed in power. When discussing current events, he emphasizes their place within a wider historical perspective. He believes that official, sanctioned historical accounts of U.S.
and British extraterritorial operations have consistently whitewashed these nations' actions in order to present them as having benevolent motives in either spreading democracy or, in older instances, spreading Christianity; by criticizing these accounts, he seeks to correct them. Prominent examples he regularly cites are the actions of the British Empire in India and Africa and U.S. actions in Vietnam, the Philippines, Latin America, and the Middle East. Chomsky's political work has centered heavily on criticizing the actions of the United States. He has said he focuses on the U.S. because the country has militarily and economically dominated the world during his lifetime and because its liberal democratic electoral system allows the citizenry to influence government policy. His hope is that, by spreading awareness of the impact U.S. foreign policies have on the populations affected by them, he can sway the populations of the U.S. and other countries into opposing the policies. He urges people to criticize their governments' motivations, decisions, and actions, to accept responsibility for their own thoughts and actions, and to apply the same standards to others as to themselves. Chomsky has been critical of U.S. involvement in the Israeli–Palestinian conflict, arguing that it has consistently blocked a peaceful settlement. He also criticizes the U.S.'s close ties with Saudi Arabia and involvement in Saudi Arabian-led intervention in Yemen, highlighting that Saudi Arabia has "one of the most grotesque human rights records in the world". Chomsky called the Russian invasion of Ukraine a criminal act of aggression and noted that Russia was committing major war crimes in the country. He considered support for Ukraine's self-defense legitimate and said Ukraine should be given enough military aid to defend itself, but not enough to cause "an escalation". His criticism of the war focused on the United States. He alleged that the U.S. rejected any compromise with Russia and that this might have provoked the invasion. According to Chomsky, the U.S. was arming Ukraine only to weaken Russia, and Ukrainian requests for heavy weaponry were untrue "Western propaganda", despite Ukraine's President Volodymyr Zelenskyy repeatedly asking for them. More than a year into the invasion, Chomsky argued that Russia was waging the war "more humanely" than the U.S. did the invasion of Iraq. In his youth, Chomsky developed a dislike of capitalism and the pursuit of material wealth. At the same time, he developed a disdain for authoritarian socialism, as represented by the Marxist–Leninist policies of the Soviet Union. Rather than accepting the common view among U.S. economists that a spectrum exists between total state ownership of the economy and total private ownership, he instead suggests that a spectrum should be understood between total democratic control of the economy and total autocratic control (whether state or private). He argues that Western capitalist countries are not really democratic, because, in his view, a truly democratic society is one in which all persons have a say in public economic policy. He has stated his opposition to ruling elites, among them institutions like the IMF, World Bank, and GATT (precursor to the WTO). Chomsky highlights that, since the 1970s, the U.S. has become increasingly economically unequal as a result of the repeal of various financial regulations and the unilateral rescinding of the Bretton Woods financial control agreement by the U.S. He characterizes the U.S. 
as a de facto one-party state, viewing both the Republican Party and Democratic Party as manifestations of a single "Business Party" controlled by corporate and financial interests. Chomsky highlights that, within Western capitalist liberal democracies, at least 80% of the population has no control over economic decisions, which are instead in the hands of a management class and ultimately controlled by a small, wealthy elite. Noting the entrenchment of such an economic system, Chomsky believes that change is possible through the organized cooperation of large numbers of people who understand the problem and know how they want to reorganize the economy more equitably. Acknowledging that corporate domination of media and government stifles any significant change to this system, he sees reason for optimism in historical examples such as the social rejection of slavery as immoral, the advances in women's rights, and the forcing of government to justify invasions. He views violent revolution to overthrow a government as a last resort to be avoided if possible, citing the example of historical revolutions where the population's welfare has worsened as a result of upheaval. Chomsky sees libertarian socialist and anarcho-syndicalist ideas as the descendants of the classical liberal ideas of the Age of Enlightenment, arguing that his ideological position revolves around "nourishing the libertarian and creative character of the human being". He envisions an anarcho-syndicalist future with direct worker control of the means of production and government by workers' councils, who would select temporary and revocable representatives to meet together at general assemblies. The point of this self-governance is to make each citizen, in Thomas Jefferson's words, "a direct participator in the government of affairs". He believes that there will be no need for political parties. By controlling their productive life, he believes that individuals can gain job satisfaction and a sense of fulfillment and purpose. He argues that unpleasant and unpopular jobs could be fully automated, specially remunerated, or communally shared. Chomsky has written prolifically about the Israeli–Palestinian conflict, aiming to raise public awareness of it. A labor Zionist who later became what is today considered an anti-Zionist, Chomsky has criticized the Israeli settlements in the Israeli-occupied West Bank, which he likens to a settler colony. He has said that the 1947 United Nations Partition Plan for Palestine was a bad decision, but given the realpolitik of the situation, he has also considered a two-state solution on the condition that the nation-states exist on equal terms. Chomsky has said that characterizing Israel's treatment of the Palestinians as apartheid, similar to the system that existed in South Africa, would be a "gift to Israel", as he has long held that "the Occupied Territories are much worse than South Africa". South Africa depended on its black population for labor, but Chomsky argues the same is not true of Israel, which in his view seeks to make the situation for Palestinians under its occupation unlivable, especially in the West Bank and the Gaza Strip, where "atrocities" take place every day. He also argues that, unlike South Africa, Israel has not sought the international community's approval, but rather relies solely on U.S. support. 
Chomsky has said that the Israeli-led blockade of the Gaza Strip has turned it into a "concentration camp" and expressed fears similar to Israeli intellectual Yeshayahu Leibowitz's 1990s warning that the continued occupation of the Palestinian territories could turn Israeli Jews into "Judeo-Nazis". Chomsky has said that Leibowitz's warning "was a direct reflection of the continued occupation, the humiliation of people, the degradation, and the terrorist attacks by the Israeli government". He has also called the U.S. a violent state that exports violence by supporting Israeli "atrocities" against the Palestinians and said that listening to American mainstream media, including CBS, is like listening to "Israeli propaganda agencies". Chomsky was denied entry to the West Bank in 2010 because of his criticisms of Israel. He had been invited to deliver a lecture at Bir Zeit University and was to meet with Palestinian Prime Minister Salam Fayyad. An Israeli Foreign Ministry spokesman later said that Chomsky was denied entry by mistake. In his 1983 book The Fateful Triangle, Chomsky criticized the Palestine Liberation Organization for its "self-destructiveness" and "suicidal character" and disapproved of its programs of "armed struggle" and "erratic violence". He also criticized the Arab governments as not "decent". Given what he has described as his very Jewish upbringing with deeply Zionist activist parents, Chomsky's views have drawn controversy and criticism. They are rooted in the kibbutzim and socialist binational cooperation. In a 2014 interview on Democracy Now!, Chomsky said that the charter of Hamas, which calls for Israel's destruction, "means practically nothing", having been created "by a small group of people under siege, under attack in 1988". He compared it to the electoral program of the Likud party, which, he said, "states explicitly that there can never be a Palestinian state west of the Jordan River. And they not only state it in their charter, that's a call for the destruction of Palestine, explicit call for it". Chomsky's political writings have largely focused on ideology, social and political power, mass media, and state policy. One of his best-known works, Manufacturing Consent, dissects the media's role in reinforcing and acquiescing to state policies across the political spectrum while marginalizing contrary perspectives. Chomsky asserts that this version of censorship, by government-guided "free market" forces, is subtler and harder to undermine than was the equivalent propaganda system in the Soviet Union. As he argues, the mainstream press is corporate-owned and thus reflects corporate priorities and interests. Acknowledging that many American journalists are dedicated and well-meaning, he argues that the mass media's choices of topics and issues, the unquestioned premises on which that coverage rests, and the range of opinions expressed are all constrained to reinforce the state's ideology: although mass media will criticize individual politicians and political parties, it will not undermine the wider state-corporate nexus of which it is a part. As evidence, he highlights that the U.S. mass media does not employ any socialist journalists or political commentators. He also points to examples of important news stories that the U.S. 
mainstream media has ignored because reporting on them would reflect badly upon the country, including the murder of Black Panther Fred Hampton with possible FBI involvement, the massacres in Nicaragua perpetrated by U.S.-funded Contras, and the constant reporting on Israeli deaths without equivalent coverage of the far larger number of Palestinian deaths in that conflict. To remedy this situation, Chomsky calls for grassroots democratic control and involvement of the media. Chomsky considers most conspiracy theories fruitless, distracting substitutes for thinking about policy formation in an institutional framework, where individual manipulation is secondary to broader social imperatives. He separates his Propaganda Model from conspiracy in that he is describing institutions following their natural imperatives rather than collusive forces with secret controls. Instead of supporting the educational system as an antidote, he believes that most education is counterproductive. Chomsky describes mass education as a system solely intended to turn farmers from independent producers into unthinking industrial employees. In the 2004 book The Anti-Chomsky Reader, Peter Collier and David Horowitz accuse Chomsky of cherry-picking facts to suit his theories. Horowitz has also criticized Chomsky's anti-Americanism: For 40 years Noam Chomsky has turned out book after book, pamphlet after pamphlet and speech after speech with one message, and one message alone: America is the Great Satan; it is the fount of evil in the world. In Chomsky's demented universe, America is responsible not only for its own bad deeds, but for the bad deeds of others, including those of the terrorists who struck the World Trade Center and the Pentagon. In this attitude he is the medium for all those who now search the ruins of Manhattan not for the victims and the American dead, but for the "root causes" of the catastrophe that befell them. For the conservative public policy think tank the Hoover Institution, Peter Schweizer wrote in January 2006, "Chomsky favors the estate tax and massive income redistribution—just not the redistribution of his income." Schweizer criticized Chomsky for setting up an estate plan and protecting his own intellectual property as it relates to his published works, as well as the high speaking fees that Chomsky received on a regular basis, around $9,000–$12,000 per talk at that time. Mark Bauerlein has accused Chomsky of credulity about socialist or communist regimes while examining capitalist regimes with greater scrutiny or criticism: Chomsky's analysis of U.S. actions plunged deep into dark U.S. machinations, but when traveling among the Communists he rested content with appearances. The countryside outside Hanoi, he reported in The New York Review of Books, displayed "a high degree of democratic participation at the village and regional levels." But how could he tell? Chomsky did not speak Vietnamese, and so he depended on government translators, tour guides, and handlers for information. In [Communist] Vietnamese hands, the clear-eyed skepticism turned into willing credulousness. According to Nikolas Kozloff, writing for Al Jazeera in September 2012, Chomsky "has drawn the world's attention to the various misdeeds of the US and its proxies around the world, and for that he deserves credit. Yet, in seeking to avoid controversy at all costs Chomsky has turned into something of an ideologue. 
Scour the Chomsky web site and you won't find significant discussion of Belarus or Latin America's flirtation with outside authoritarian leaders, for that matter." Political activist George Monbiot has argued that "Part of the problem is that a kind of cult has developed around Noam Chomsky and John Pilger, which cannot believe they could ever be wrong, and produces ever more elaborate conspiracy theories to justify their mistakes." Defenders of Chomsky have countered that he has been censored or left out of public debate. Claims of this nature date to the Reagan era. Writing for The Washington Post in February 1988, Saul Landau wrote, "It is unhealthy that Chomsky's insights are excluded from the policy debate. His relentless prosecutorial prose, with a hint of Talmudic whine and the rationalist anarchism of Tom Paine, may reflect a justified frustration." Philosophy Chomsky has also been active in a number of philosophical fields, including philosophy of mind, philosophy of language, and philosophy of science. In these fields he is credited with ushering in the "cognitive revolution", a significant paradigm shift that rejected logical positivism, the prevailing philosophical methodology of the time, and reframed how philosophers think about language and the mind. Chomsky views the cognitive revolution as rooted in 17th-century rationalist ideals. His position—the idea that the mind contains inherent structures to understand language, perception, and thought—has more in common with rationalism than behaviorism. He named one of his key works Cartesian Linguistics: A Chapter in the History of Rationalist Thought (1966). This sparked criticism from historians and philosophers who disagreed with Chomsky's interpretations of classical sources and use of philosophical terminology.[f] In the philosophy of language, Chomsky is particularly known for his criticisms of the notion of reference and meaning in human language and his perspective on the nature and function of mental representations. Chomsky's famous 1971 debate on human nature with the French philosopher Michel Foucault was a symbolic clash of the analytic and continental philosophy traditions, represented by Chomsky and Foucault, respectively. It showed what appeared to be irreconcilable differences between two moral and intellectual luminaries of the 20th century. Foucault held that any definition of human nature is connected to our present-day conceptions of ourselves; Chomsky held that human nature contained universals such as a common standard of moral justice as deduced through reason. Chomsky criticized postmodernism and French philosophy generally, arguing that the obscure language of postmodern, leftist philosophers gives little aid to the working classes. He has also debated analytic philosophers, including Tyler Burge, Donald Davidson, Michael Dummett, Saul Kripke, Thomas Nagel, Hilary Putnam, Willard Van Orman Quine, and John Searle. Chomsky's contributions span intellectual and world history, including the history of philosophy. Irony is a recurring characteristic of his writing, such as rhetorically implying that his readers already know something to be true, which engages the reader more actively in assessing the veracity of his claims. Personal life Chomsky endeavors to separate his family life, linguistic scholarship, and political activism from each other. An intensely private person, he is uninterested in appearances and the fame his work has brought him. 
McGilvray suggests that Chomsky is not motivated by a desire for fame, but impelled to tell what he perceives as the truth and a desire to aid others in doing so. Chomsky acknowledges that his income affords him a privileged life compared to the majority of the world's population; nevertheless, he characterizes himself as a "worker", albeit one who uses his intellect as his employable skill. He reads four or five newspapers daily; in the U.S., he subscribes to The Boston Globe, The New York Times, The Wall Street Journal, Financial Times, and The Christian Science Monitor. Chomsky is not religious but has expressed approval of forms of religion such as liberation theology. Chomsky is known to use charged language ("corrupt", "fascist", "fraudulent") when describing established political and academic figures, which can polarize his audience but is in keeping with his belief that much scholarship is self-serving. His colleague Steven Pinker has said that Chomsky "portrays people who disagree with him as stupid or evil, using withering scorn in his rhetoric", and that this contributes to the extreme reactions he receives. Chomsky avoids academic conferences, including left-oriented ones such as the Socialist Scholars Conference, preferring to speak to activist groups or hold university seminars for mass audiences. His approach to academic freedom has led him to support MIT academics whose actions he deplores; in 1969, when Chomsky heard that Walt Rostow, a major architect of the Vietnam war, wanted to return to work at MIT, Chomsky threatened "to protest publicly" if Rostow were denied a position at MIT. In 1989, when Pentagon adviser John Deutch applied to be president of MIT, Chomsky supported his candidacy. Later, when Deutch became head of the CIA, The New York Times quoted Chomsky as saying, "He has more honesty and integrity than anyone I've ever met. ... If somebody's got to be running the CIA, I'm glad it's him." Chomsky was married to Carol Doris (née Schatz) from 1949 until her death in 2008. They had three children together: Aviva (b. 1957), Diane (b. 1960), and Harry (b. 1967). In 2014, Chomsky married Valeria Wasserman, a translator for the Institute for Advanced Studies [pt] at the University of São Paulo. They have owned a home in Wasserman's native country, Brazil, since 2015. Judith Chomsky and Marvin J. Chomsky are cousins of Noam Chomsky. In 2023, Chomsky suffered a massive stroke and was flown to a hospital in São Paulo, Brazil, to recuperate. He can no longer walk or communicate, making his return to public life improbable, but he continues to follow current events such as the Gaza war. He was discharged in June 2024 to continue his recovery at home. The same month, Chomsky trended on social media amid false reports of his death. Periodicals retracted premature obituaries. As of November 2025, Chomsky is reportedly still convalescing in Brazil. Emails related to the activities of convicted child sex offender Jeffrey Epstein released by the House Oversight Committee in November 2025 revealed that Chomsky befriended him after Epstein's 2008 conviction and remained in touch with him at least through 2017. In a letter, he wrote that he considered Epstein a "highly valued friend and regular source of intellectual exchange and stimulation". In December 2025, Congress released a photo of Chomsky with Steve Bannon from Epstein's estate and another showing him flying with Epstein in Epstein's private plane. 
Before the files' release, he had said he received around $270,000 from an account connected to Epstein while sorting through common funds after his wife Carol's death. In 2016, Epstein invited Chomsky and his wife, Valeria, to meet in either New York or the Caribbean. Chomsky replied: "Valeria's always keen on New York. I'm really fantasizing about the Caribbean island." In 2019, Epstein referenced advice he said Chomsky had given him on handling media scrutiny after his 2008 plea deal: "The best way to proceed is to ignore it ... That's particularly true now with the hysteria that has developed about abuse of women, which has reached the point that even questioning a charge is a crime worse than murder." Chomsky consulted Epstein for guidance on drafting an email to his financial advisor about a $187,000 payment that had already been disbursed. Chomsky also contacted Bannon using an email address Epstein provided. In 2026, Valeria Chomsky wrote that Chomsky's relationship with Epstein was a "grave mistake" and apologized on Chomsky's behalf, writing, "It was deeply disturbing for both of us to realize we had engaged with someone who presented as a helpful friend but led a hidden life of criminal, inhumane, and perverted acts." The couple's financial situation related to retirement income played a part in the two money transactions with Epstein, according to reporting in both The Guardian and The Jerusalem Post. Reception and influence [Chomsky's] voice is heard in academia beyond linguistics and philosophy: from computer science to neuroscience, from anthropology to education, mathematics and literary criticism. If we include Chomsky's political activism then the boundaries become quite blurred, and it comes as no surprise that Chomsky is increasingly seen as enemy number one by those who inhabit that wide sphere of reactionary discourse and action. Chomsky has been a defining Western intellectual figure, central to the field of linguistics and definitive in cognitive science, computer science, philosophy, and psychology. In addition to being known as one of the most important intellectuals of his time,[g] Chomsky has a dual legacy as a leader and luminary in both linguistics and the realm of political dissent. Despite his academic success, his political viewpoints and activism have resulted in his being distrusted by mainstream media, and he is regarded as being "on the outer margin of acceptability". Chomsky's public image and social reputation often color his work's public reception. McGilvray observes that Chomsky inaugurated the "cognitive revolution" in linguistics, and that he is largely responsible for establishing the field as a formal, natural science, moving it away from the procedural form of structural linguistics dominant during the mid-20th century. As such, some have called Chomsky "the father of modern linguistics".[b] Linguist John Lyons further remarked that within a few decades of publication, Chomskyan linguistics had become "the most dynamic and influential" school of thought in the field. By the 1970s his work had also come to exert a considerable influence on philosophy, and a Minnesota State University Moorhead poll ranked Syntactic Structures as the single most important work in cognitive science. In addition, his work in automata theory and the Chomsky hierarchy have become well known in computer science, and he is much cited in computational linguistics. 
Chomsky's criticisms of behaviorism contributed substantially to the decline of behaviorist psychology; in addition, he is generally regarded as one of the primary founders of the field of cognitive science. Some arguments in evolutionary psychology are derived from his research results; Nim Chimpsky, a chimpanzee who was the subject of a study in animal language acquisition at Columbia University, was named after Chomsky in reference to his view of language acquisition as a uniquely human ability. ACM Turing Award winner Donald Knuth credited Chomsky's work with helping him combine his interests in mathematics, linguistics, and computer science. IBM computer scientist John Backus, another Turing Award winner, used some of Chomsky's concepts to help him develop FORTRAN, the first widely used high-level computer programming language. Chomsky's theory of generative grammar has also influenced work in music theory and analysis, such as Fred Lerdahl's and Ray Jackendoff's generative theory of tonal music. Chomsky is among the most cited authors living or dead.[h] He was cited within the Arts and Humanities Citation Index more often than any other living scholar from 1980 to 1992. Chomsky was also extensively cited in the Social Sciences Citation Index and Science Citation Index during the same period. The librarian who conducted the research said that the statistics show that "he is very widely read across disciplines and that his work is used by researchers across disciplines ... it seems that you can't write a paper without citing Noam Chomsky." As a result of his influence, there are dueling camps of Chomskyan and non-Chomskyan linguistics. Their disputes are often acrimonious. Additionally, according to journalist Maya Jaggi, Chomsky is among the most quoted sources in the humanities, ranking alongside Karl Marx, William Shakespeare and the Bible. Chomsky's status as the "most-quoted living author" is credited to his political writings, which vastly outnumber his writings on linguistics. Chomsky biographer Wolfgang B. Sperlich characterizes him as "one of the most notable contemporary champions of the people"; journalist John Pilger has described him as a "genuine people's hero; an inspiration for struggles all over the world for that basic decency known as freedom. To a lot of people in the margins—activists and movements—he's unfailingly supportive." Arundhati Roy has called him "one of the greatest, most radical public thinkers of our time", and Edward Said thought him "one of the most significant challengers of unjust power and delusions". Fred Halliday has said that by the start of the 21st century Chomsky had become a "guru" for the world's anti-capitalist and anti-imperialist movements. The propaganda model of media criticism that he and Herman developed has been widely accepted in radical media critiques and adopted to some level in mainstream criticism of the media, also exerting a significant influence on the growth of alternative media, including radio, publishers, and the Internet, which in turn have helped to disseminate his work. Despite this broad influence, university departments devoted to history and political science rarely include Chomsky's work on their undergraduate syllabi. 
Critics have argued that despite publishing widely on social and political issues, Chomsky has no formal expertise in these areas; he has responded that such issues are not as complex as many social scientists claim and that almost everyone is able to comprehend them regardless of whether they have been academically trained to do so. Some have responded to these criticisms by questioning the critics' motives and their understanding of Chomsky's ideas. Sperlich, for instance, says that Chomsky has been vilified by corporate interests, particularly in the mainstream press. Likewise, according to McGilvray, many of Chomsky's critics "do not bother quoting his work or quote out of context, distort, and create straw men that cannot be supported by Chomsky's text". Chomsky drew criticism for not calling the Bosnian War's Srebrenica massacre a "genocide". While he did not deny the fact of the massacre, which he called "a horror story and major crime", he felt the massacre did not meet the definition of genocide. Critics have accused Chomsky of denying the Bosnian genocide. Chomsky's far-reaching criticisms of U.S. foreign policy and the legitimacy of U.S. power have raised controversy. A document obtained pursuant to a Freedom of Information Act (FOIA) request from the U.S. government revealed that the Central Intelligence Agency (CIA) monitored his activities and for years denied doing so. The CIA also destroyed its files on Chomsky at some point, possibly in violation of federal law. He has often received undercover police protection at MIT and when speaking on the Middle East but has refused uniformed police protection. German news magazine Der Spiegel described Chomsky as "the Ayatollah of anti-American hatred", while American conservative commentator David Horowitz called him "the most devious, the most dishonest and ... the most treacherous intellect in America", whose work is infused with "anti-American dementia" and evidences his "pathological hatred of his own country". Chomsky's criticism of Israel has led to his being called a traitor to the Jewish people and an antisemite. Criticizing Chomsky's defense of the right of individuals to engage in Holocaust denial on the grounds that freedom of speech must be extended to all viewpoints, Werner Cohn called Chomsky "the most important patron" of the neo-Nazi movement. The Anti-Defamation League (ADL) called him a Holocaust denier, describing him as a "dupe of intellectual pride so overweening that he is incapable of making distinctions between totalitarian and democratic societies, between oppressors and victims". In turn, Chomsky has claimed that the ADL is dominated by "Stalinist types" who oppose democracy in Israel. The lawyer Alan Dershowitz has called Chomsky a "false prophet of the left"; Chomsky called Dershowitz "a complete liar" who is on "a crazed jihad, dedicating much of his life to trying to destroy my reputation". In early 2016, President Recep Tayyip Erdoğan of Turkey publicly rebuked Chomsky after he signed an open letter condemning Erdoğan for his anti-Kurdish repression and double standards on terrorism. Chomsky accused Erdoğan of hypocrisy, noting that Erdoğan supports al-Qaeda's Syrian affiliate, the al-Nusra Front. In 1970, the London Times named Chomsky one of the "makers of the twentieth century". He was voted the world's leading public intellectual in The 2005 Global Intellectuals Poll jointly conducted by American magazine Foreign Policy and British magazine Prospect. 
New Statesman readers listed Chomsky among the world's foremost heroes in 2006. In 2011, the US Peace Memorial Foundation awarded the US Peace Prize to Chomsky, "whose antiwar activities for five decades both educate and inspire". In the United States he is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, the Linguistic Society of America, the American Association for the Advancement of Science, the American Philosophical Association, and the American Philosophical Society. Abroad he is a corresponding fellow of the British Academy, an honorary member of the British Psychological Society, a member of the Deutsche Akademie der Naturforscher Leopoldina, and a foreign member of the Department of Social Sciences of the Serbian Academy of Sciences and Arts. He received a 1971 Guggenheim Fellowship, the 1984 American Psychological Association Award for Distinguished Contributions to Psychology, the 1988 Kyoto Prize in Basic Sciences, the 1996 Helmholtz Medal, the 1999 Benjamin Franklin Medal in Computer and Cognitive Science, the 2010 Erich Fromm Prize, and the British Academy's 2014 Neil and Saras Smith Medal for Linguistics. He is also a two-time winner of the NCTE George Orwell Award for Distinguished Contribution to Honesty and Clarity in Public Language (1987 and 1989), and has received the Rabindranath Tagore Centenary Award from The Asiatic Society. Chomsky received the 2004 Carl-von-Ossietzky Prize from the city of Oldenburg, Germany, in recognition of his body of work as a political analyst and media critic. He received an honorary fellowship in 2005 from the Literary and Historical Society of University College Dublin, and the 2008 President's Medal from the Literary and Debating Society of the National University of Ireland, Galway. Since 2009, he has been an honorary member of the International Association of Professional Translators and Interpreters (IAPTI). He received the University of Wisconsin's A.E. Havens Center's Award for Lifetime Contribution to Critical Scholarship and was inducted into IEEE Intelligent Systems' AI's Hall of Fame for "significant contributions to the field of AI and intelligent systems". Chomsky has an Erdős number of four. For his work in human rights, peace, and social criticism, he received the 2011 Sydney Peace Prize, the Sretenje Order in 2015, the 2017 Seán MacBride Peace Prize, and the Dorothy Eldridge Peacemaker Award. Chomsky has received honorary doctorates from institutions including the University of London and the University of Chicago (1967), Loyola University Chicago and Swarthmore College (1970), Bard College (1971), Delhi University (1972), the University of Massachusetts (1973), and the International School for Advanced Studies (2012). Public lectures given by Chomsky include the 1969 John Locke Lectures, 1975 Whidden Lectures, 1977 Huizinga Lecture, and 1988 Massey Lectures. Various tributes to Chomsky have been dedicated over the years. He is the eponym for a bee species, a frog species, an asteroid, and a building complex at the Indian university Jamia Millia Islamia. Actor Viggo Mortensen and avant-garde guitarist Buckethead dedicated their 2003 album Pandemoniumfromamerica to Chomsky.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Word_n-gram_language_model#Skip-gram_language_model] | [TOKENS: 2918]
Contents Word n-gram language model A word n-gram language model is a statistical model of language which calculates the probability of the next word in a sequence from a fixed size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens are introduced to denote the start and end of a sentence ⟨ s ⟩ {\displaystyle \langle s\rangle } and ⟨ / s ⟩ {\displaystyle \langle /s\rangle } . To prevent a zero probability being assigned to unseen words, the probability of each seen word is slightly lowered to make room for the unseen words in a given corpus. To achieve this, various smoothing methods are used, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques, such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network–based models, which in turn have been superseded by Transformer-based models often referred to as large language models. Unigram model A special case, where n = 1, is called a unigram model. Probability of each word in a sequence is independent from probabilities of other word in the sequence. Each word's probability in the sequence is equal to the word's probability in an entire document. P uni ( t 1 t 2 t 3 ) = P ( t 1 ) P ( t 2 ) P ( t 3 ) . {\displaystyle P_{\text{uni}}(t_{1}t_{2}t_{3})=P(t_{1})P(t_{2})P(t_{3}).} The model consists of units, each treated as one-state finite automata. Words with their probabilities in a document can be illustrated as follows. Total mass of word probabilities distributed across the document's vocabulary, is 1. ∑ word in doc P ( word ) = 1 {\displaystyle \sum _{\text{word in doc}}P({\text{word}})=1} The probability generated for a specific query is calculated as P ( query ) = ∏ word in query P ( word ) {\displaystyle P({\text{query}})=\prod _{\text{word in query}}P({\text{word}})} Unigram models of different documents have different probabilities of words in it. The probability distributions from different documents are used to generate hit probabilities for each query. Documents can be ranked for a query according to the probabilities. Example of unigram models of two documents: Bigram model In a bigram word (n = 2) language model, the probability of the sentence I saw the red house is approximated as P ( I, saw, the, red, house ) ≈ P ( I ∣ ⟨ s ⟩ ) P ( saw ∣ I ) P ( the ∣ saw ) P ( red ∣ the ) P ( house ∣ red ) P ( ⟨ / s ⟩ ∣ house ) {\displaystyle P({\text{I, saw, the, red, house}})\approx P({\text{I}}\mid \langle s\rangle )P({\text{saw}}\mid {\text{I}})P({\text{the}}\mid {\text{saw}})P({\text{red}}\mid {\text{the}})P({\text{house}}\mid {\text{red}})P(\langle /s\rangle \mid {\text{house}})} Trigram model In a trigram (n = 3) language model, the approximation is P ( I, saw, the, red, house ) ≈ P ( I ∣ ⟨ s ⟩ , ⟨ s ⟩ ) P ( saw ∣ ⟨ s ⟩ , I ) P ( the ∣ I, saw ) P ( red ∣ saw, the ) P ( house ∣ the, red ) P ( ⟨ / s ⟩ ∣ red, house ) {\displaystyle P({\text{I, saw, the, red, house}})\approx P({\text{I}}\mid \langle s\rangle ,\langle s\rangle )P({\text{saw}}\mid \langle s\rangle ,I)P({\text{the}}\mid {\text{I, saw}})P({\text{red}}\mid {\text{saw, the}})P({\text{house}}\mid {\text{the, red}})P(\langle /s\rangle \mid {\text{red, house}})} Note that the context of the first n – 1 n-grams is filled with start-of-sentence markers, typically denoted <s>. 
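To make the bigram factorization above concrete, the following sketch (in Python, with an invented three-sentence corpus) estimates bigram probabilities from raw counts and scores the example sentence after padding it with the ⟨s⟩ and ⟨/s⟩ markers just described. It uses unsmoothed maximum-likelihood estimates, so any unseen bigram yields a probability of zero; real systems would apply smoothing, as discussed below.

```python
from collections import Counter

# Toy corpus; purely illustrative.
corpus = [
    "I saw the red house",
    "I saw the dog",
    "the red house is old",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def bigram_prob(sentence):
    """Maximum-likelihood P(sentence) under a bigram model (no smoothing,
    no handling of out-of-vocabulary words)."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        # count(prev, word) / count(prev)
        p *= bigrams[(prev, word)] / unigrams[prev]
    return p

print(bigram_prob("I saw the red house"))
```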
Additionally, without an end-of-sentence marker, the probability of an ungrammatical sequence *I saw the would always be higher than that of the longer sentence I saw the red house. Approximation method The approximation method calculates the probability P ( w 1 , … , w m ) {\displaystyle P(w_{1},\ldots ,w_{m})} of observing the sentence w 1 , … , w m {\displaystyle w_{1},\ldots ,w_{m}} P ( w 1 , … , w m ) = ∏ i = 1 m P ( w i ∣ w 1 , … , w i − 1 ) ≈ ∏ i = 2 m P ( w i ∣ w i − ( n − 1 ) , … , w i − 1 ) {\displaystyle P(w_{1},\ldots ,w_{m})=\prod _{i=1}^{m}P(w_{i}\mid w_{1},\ldots ,w_{i-1})\approx \prod _{i=2}^{m}P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})} It is assumed that the probability of observing the ith word wi (in the context window consisting of the preceding i − 1 words) can be approximated by the probability of observing it in the shortened context window consisting of the preceding n − 1 words (nth-order Markov property). To clarify, for n = 3 and i = 2 we have P ( w i ∣ w i − ( n − 1 ) , … , w i − 1 ) = P ( w 2 ∣ w 1 ) {\displaystyle P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})=P(w_{2}\mid w_{1})} . The conditional probability can be calculated from n-gram model frequency counts: P ( w i ∣ w i − ( n − 1 ) , … , w i − 1 ) = c o u n t ( w i − ( n − 1 ) , … , w i − 1 , w i ) c o u n t ( w i − ( n − 1 ) , … , w i − 1 ) {\displaystyle P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})={\frac {\mathrm {count} (w_{i-(n-1)},\ldots ,w_{i-1},w_{i})}{\mathrm {count} (w_{i-(n-1)},\ldots ,w_{i-1})}}} An issue when using n-gram language models are out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used. In some cases, it may be necessary to estimate the language model with a specific fixed vocabulary. In such a scenario, the n-grams in the corpus that contain an out-of-vocabulary word are ignored. The n-gram probabilities are smoothed over all the words in the vocabulary even if they were not observed. Nonetheless, it is essential in some cases to explicitly model the probability of out-of-vocabulary words by introducing a special token (e.g. <unk>) into the vocabulary. Out-of-vocabulary words in the corpus are effectively replaced with this special <unk> token before n-grams counts are cumulated. With this option, it is possible to estimate the transition probabilities of n-grams involving out-of-vocabulary words. n-grams for approximate matching n-grams were also used for approximate matching. If we convert strings (with only letters in the English alphabet) into character 3-grams, we get a 26 3 {\displaystyle 26^{3}} -dimensional space (the first dimension measures the number of occurrences of "aaa", the second "aab", and so forth for all possible combinations of three letters). Using this representation, we lose information about the string. However, we know empirically that if two strings of real text have a similar vector representation (as measured by cosine distance) then they are likely to be similar. Other metrics have also been applied to vectors of n-grams with varying, sometimes better, results. For example, z-scores have been used to compare documents by examining how many standard deviations each n-gram differs from its mean occurrence in a large collection, or text corpus, of documents (which form the "background" vector). 
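A minimal sketch of the character 3-gram comparison just described, assuming nothing beyond the standard library: each string is mapped to a vector of overlapping trigram counts, and pairs of strings are compared by cosine similarity (the complement of the cosine distance mentioned above). The example strings are invented.

```python
from collections import Counter
from math import sqrt

def char_trigrams(s):
    """Counter of overlapping character 3-grams of a lowercased string."""
    s = s.lower()
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(char_trigrams("approximate matching"),
             char_trigrams("aproximate matchng")))   # high despite the typos
print(cosine(char_trigrams("approximate matching"),
             char_trigrams("language model")))       # much lower
```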
In the event of small counts, the g-score (also known as g-test) gave better results. It is also possible to take a more principled approach to the statistics of n-grams, modeling similarity as the likelihood that two strings came from the same source directly in terms of a problem in Bayesian inference. n-gram-based searching was also used for plagiarism detection. Bias–variance tradeoff To choose a value for n in an n-gram model, it is necessary to find the right trade-off between the stability of the estimate against its appropriateness. This means that trigram (i.e. triplets of words) is a common choice with large training corpora (millions of words), whereas a bigram is often used with smaller ones. There are problems of balance weight between infrequent grams (for example, if a proper name appeared in the training data) and frequent grams. Also, items not seen in the training data will be given a probability of 0.0 without smoothing. For unseen but plausible data from a sample, one can introduce pseudocounts. Pseudocounts are generally motivated on Bayesian grounds. In practice it was necessary to smooth the probability distributions by also assigning non-zero probabilities to unseen words or n-grams. The reason is that models derived directly from the n-gram frequency counts have severe problems when confronted with any n-grams that have not explicitly been seen before – the zero-frequency problem. Various smoothing methods were used, from simple "add-one" (Laplace) smoothing (assign a count of 1 to unseen n-grams; see Rule of succession) to more sophisticated models, such as Good–Turing discounting or back-off models. Some of these methods are equivalent to assigning a prior distribution to the probabilities of the n-grams and using Bayesian inference to compute the resulting posterior n-gram probabilities. However, the more sophisticated smoothing models were typically not derived in this fashion, but instead through independent considerations. Skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. word n-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other. For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences In skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-d vector representation, then v ( k i n g ) − v ( m a l e ) + v ( f e m a l e ) ≈ v ( q u e e n ) {\displaystyle v(\mathrm {king} )-v(\mathrm {male} )+v(\mathrm {female} )\approx v(\mathrm {queen} )} where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. Syntactic n-grams Syntactic n-grams are n-grams defined by paths in syntactic dependency or constituent trees rather than the linear structure of the text. For example, the sentence "economic news has little effect on financial markets" can be transformed to syntactic n-grams following the tree structure of its dependency relations: news-economic, effect-little, effect-on-markets-financial. 
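As a concrete reading of the k-skip-n-gram definition above, the following sketch enumerates skip-grams of a short illustrative sentence; with k = 0 it reduces to ordinary n-grams, and with k = 1 and n = 2 it returns all bigrams plus the pairs that skip one intervening word.

```python
from itertools import combinations

def skip_grams(tokens, n, k):
    """All k-skip-n-grams of a token list: length-n subsequences whose
    successive components skip at most k intervening tokens (one reading
    of 'distance at most k from each other')."""
    grams = set()
    for idxs in combinations(range(len(tokens)), n):
        if all(j - i <= k + 1 for i, j in zip(idxs, idxs[1:])):
            grams.add(tuple(tokens[i] for i in idxs))
    return grams

tokens = "the rain in Spain falls mainly".split()   # illustrative sentence
print(sorted(skip_grams(tokens, n=2, k=0)))  # ordinary bigrams
print(sorted(skip_grams(tokens, n=2, k=1)))  # bigrams plus one-word skips
```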
Syntactic n-grams are intended to reflect syntactic structure more faithfully than linear n-grams, and have many of the same applications, especially as features in a vector space model. For certain tasks, such as authorship attribution, syntactic n-grams give better results than standard n-grams. Another type of syntactic n-gram is the part-of-speech n-gram, defined as a fixed-length contiguous overlapping subsequence extracted from the part-of-speech sequence of a text. Part-of-speech n-grams have several applications, most commonly in information retrieval. Other applications n-grams find use in several areas of computer science, computational linguistics, and applied mathematics.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Regularization_(mathematics)] | [TOKENS: 6846]
Contents Regularization (mathematics) In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer to a problem into a simpler one. It is often used in solving ill-posed problems or to prevent overfitting. Although regularization procedures can be divided in many ways, the following delineation is particularly helpful: in explicit regularization, independent of the problem or model, there is always a data term that corresponds to a likelihood of the measurement, and a regularization term that corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more aligned to the data or to enforce regularization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition. In machine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce the generalization error, i.e. the error score of the trained model on the evaluation set (testing data) and not the training data. One of the earliest uses of regularization is Tikhonov regularization (ridge regression), related to the method of least squares. Regularization in machine learning In machine learning, a key challenge is enabling models to accurately predict outcomes on unseen data, not just on familiar training data. Regularization is crucial for addressing overfitting, where a model memorizes training data details but cannot generalize to new data. The goal of regularization is to encourage models to learn the broader patterns within the data rather than memorizing it. Techniques like early stopping, L1 and L2 regularization, and dropout are designed to prevent overfitting and underfitting, thereby enhancing the model's ability to adapt to and perform well with new data, thus improving model generalization. Early stopping halts training when validation performance deteriorates, preventing overfitting by stopping before the model memorizes training data. L1 and L2 regularization add penalty terms to the cost function to discourage complex models. In the context of neural networks, the Dropout technique repeatedly ignores random subsets of neurons during training, which simulates the training of multiple neural network architectures at once to improve generalization. Classification Empirical learning of classifiers (from a finite data set) is always an underdetermined problem, because it attempts to infer a function of any x {\displaystyle x} given only examples x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} .
A regularization term (or regularizer) R ( f ) {\displaystyle R(f)} is added to a loss function: min f ∑ i = 1 n V ( f ( x i ) , y i ) + λ R ( f ) {\displaystyle \min _{f}\sum _{i=1}^{n}V(f(x_{i}),y_{i})+\lambda R(f)} where V {\displaystyle V} is an underlying loss function that describes the cost of predicting f ( x ) {\displaystyle f(x)} when the label is y {\displaystyle y} , such as the square loss or hinge loss; and λ {\displaystyle \lambda } is a parameter which controls the importance of the regularization term. R ( f ) {\displaystyle R(f)} is typically chosen to impose a penalty on the complexity of f {\displaystyle f} . Concrete notions of complexity used include restrictions for smoothness and bounds on the vector space norm.[page needed] A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution (as depicted in the figure above, where the green function, the simpler one, may be preferred). From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure[clarification needed] into the learning problem. The same idea arose in many fields of science. A simple form of regularization applied to integral equations (Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular. Regularization can be motivated as a technique to improve the generalizability of a learned model. The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. The expected error of a function f n {\displaystyle f_{n}} is: I [ f n ] = ∫ X × Y V ( f n ( x ) , y ) ρ ( x , y ) d x d y {\displaystyle I[f_{n}]=\int _{X\times Y}V(f_{n}(x),y)\rho (x,y)\,dx\,dy} where X {\displaystyle X} and Y {\displaystyle Y} are the domains of input data x {\displaystyle x} and their labels y {\displaystyle y} respectively. Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over the N {\displaystyle N} available samples: I S [ f n ] = 1 n ∑ i = 1 N V ( f n ( x ^ i ) , y ^ i ) {\displaystyle I_{S}[f_{n}]={\frac {1}{n}}\sum _{i=1}^{N}V(f_{n}({\hat {x}}_{i}),{\hat {y}}_{i})} Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. of x i {\displaystyle x_{i}} ) were made with noise, this model may suffer from overfitting and display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization. Tikhonov regularization (ridge regression) These techniques are named for Andrey Nikolayevich Tikhonov, who applied regularization to integral equations and made important contributions in many other areas. 
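A minimal numerical sketch of the penalized objective above, instantiated with the square loss and the squared L2 norm as the regularizer (the Tikhonov/ridge case introduced next). It uses the standard closed-form ridge solution w = (XᵀX + λnI)⁻¹XᵀY, which the following passage derives; the synthetic data and the value of λ are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=n)   # noisy linear data

lam = 0.1  # regularization strength (arbitrary, for illustration)

# Closed-form Tikhonov/ridge solution: w = (X^T X + lam * n * I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Unregularized least squares, for comparison
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

print("ridge:", np.round(w_ridge, 3))
print("ols:  ", np.round(w_ols, 3))
print("ridge has smaller norm:", np.linalg.norm(w_ridge) <= np.linalg.norm(w_ols))
```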
When learning a linear function f {\displaystyle f} , characterized by an unknown vector w {\displaystyle w} such that f ( x ) = w ⋅ x {\displaystyle f(x)=w\cdot x} , one can add the L 2 {\displaystyle L_{2}} -norm of the vector w {\displaystyle w} to the loss expression in order to prefer solutions with smaller norms. Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as: min w ∑ i = 1 n V ( x ^ i ⋅ w , y ^ i ) + λ ‖ w ‖ 2 2 , {\displaystyle \min _{w}\sum _{i=1}^{n}V({\hat {x}}_{i}\cdot w,{\hat {y}}_{i})+\lambda \left\|w\right\|_{2}^{2},} where ( x ^ i , y ^ i ) , 1 ≤ i ≤ n , {\displaystyle ({\hat {x}}_{i},{\hat {y}}_{i}),\,1\leq i\leq n,} would represent samples used for training. In the case of a general function, the norm of the function in its reproducing kernel Hilbert space is: min f ∑ i = 1 n V ( f ( x ^ i ) , y ^ i ) + λ ‖ f ‖ H 2 {\displaystyle \min _{f}\sum _{i=1}^{n}V(f({\hat {x}}_{i}),{\hat {y}}_{i})+\lambda \left\|f\right\|_{\mathcal {H}}^{2}} As the L 2 {\displaystyle L_{2}} norm is differentiable, learning can be advanced by gradient descent. The learning problem with the least squares loss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimal w {\displaystyle w} is the one for which the gradient of the loss function with respect to w {\displaystyle w} is 0. min w 1 n ( X ^ w − Y ) T ( X ^ w − Y ) + λ ‖ w ‖ 2 2 {\displaystyle \min _{w}{\frac {1}{n}}\left({\hat {X}}w-Y\right)^{\mathsf {T}}\left({\hat {X}}w-Y\right)+\lambda \left\|w\right\|_{2}^{2}} ∇ w = 2 n X ^ T ( X ^ w − Y ) + 2 λ w {\displaystyle \nabla _{w}={\frac {2}{n}}{\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+2\lambda w} 0 = X ^ T ( X ^ w − Y ) + n λ w {\displaystyle 0={\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+n\lambda w} w = ( X ^ T X ^ + λ n I ) − 1 ( X ^ T Y ) {\displaystyle w=\left({\hat {X}}^{\mathsf {T}}{\hat {X}}+\lambda nI\right)^{-1}\left({\hat {X}}^{\mathsf {T}}Y\right)} where the third statement is a first-order condition. By construction of the optimization problem, other values of w {\displaystyle w} give larger values for the loss function. This can be verified by examining the second derivative ∇ w w {\displaystyle \nabla _{ww}} . During training, this algorithm takes O ( d 3 + n d 2 ) {\displaystyle O(d^{3}+nd^{2})} time. The terms correspond to the matrix inversion and calculating X T X {\displaystyle X^{\mathsf {T}}X} , respectively. Testing takes O ( n d ) {\displaystyle O(nd)} time. Early stopping Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization. Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set. Consider the finite approximation of Neumann series for an invertible matrix A where ‖ I − A ‖ < 1 {\displaystyle \left\|I-A\right\|<1} : ∑ i = 0 T − 1 ( I − A ) i ≈ A − 1 {\displaystyle \sum _{i=0}^{T-1}\left(I-A\right)^{i}\approx A^{-1}} This can be used to approximate the analytical solution of unregularized least squares, if γ is introduced to ensure the norm is less than one. 
w T = γ n ∑ i = 0 T − 1 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ {\displaystyle w_{T}={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}} The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail. By limiting T, the only free parameter in the algorithm above, the problem is regularized for time, which may improve its generalization. The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical risk I s [ w ] = 1 2 n ‖ X ^ w − Y ^ ‖ R n 2 {\displaystyle I_{s}[w]={\frac {1}{2n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|_{\mathbb {R} ^{n}}^{2}} with the gradient descent update: w 0 = 0 w t + 1 = ( I − γ n X ^ T X ^ ) w t + γ n X ^ T Y ^ {\displaystyle {\begin{aligned}w_{0}&=0\\[1ex]w_{t+1}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)w_{t}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} The base case is trivial. The inductive case is proved as follows: w T = ( I − γ n X ^ T X ^ ) γ n ∑ i = 0 T − 2 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ + γ n X ^ T Y ^ = γ n ∑ i = 1 T − 1 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ + γ n X ^ T Y ^ = γ n ∑ i = 0 T − 1 ( I − γ n X ^ T X ^ ) i X ^ T Y ^ {\displaystyle {\begin{aligned}w_{T}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right){\frac {\gamma }{n}}\sum _{i=0}^{T-2}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=1}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} Regularizers for sparsity Assume that a dictionary ϕ j {\displaystyle \phi _{j}} with dimension p {\displaystyle p} is given such that a function in the function space can be expressed as: f ( x ) = ∑ j = 1 p ϕ j ( x ) w j {\displaystyle f(x)=\sum _{j=1}^{p}\phi _{j}(x)w_{j}} Enforcing a sparsity constraint on w {\displaystyle w} can lead to simpler and more interpretable models. This is useful in many real-life applications such as computational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power. A sensible sparsity constraint is the L 0 {\displaystyle L_{0}} norm ‖ w ‖ 0 {\displaystyle \|w\|_{0}} , defined as the number of non-zero elements in w {\displaystyle w} . Solving a L 0 {\displaystyle L_{0}} regularized learning problem, however, has been demonstrated to be NP-hard. The L 1 {\displaystyle L_{1}} norm (see also Norms) can be used to approximate the optimal L 0 {\displaystyle L_{0}} norm via convex relaxation. It can be shown that the L 1 {\displaystyle L_{1}} norm induces sparsity. In the case of least squares, this problem is known as LASSO in statistics and basis pursuit in signal processing. min w ∈ R p 1 n ‖ X ^ w − Y ^ ‖ 2 + λ ‖ w ‖ 1 {\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left\|w\right\|_{1}} L 1 {\displaystyle L_{1}} regularization can occasionally produce non-unique solutions. 
A simple example is provided in the figure when the space of possible solutions lies on a 45 degree line. This can be problematic for certain applications, and is overcome by combining L 1 {\displaystyle L_{1}} with L 2 {\displaystyle L_{2}} regularization in elastic net regularization, which takes the following form: min w ∈ R p 1 n ‖ X ^ w − Y ^ ‖ 2 + λ ( α ‖ w ‖ 1 + ( 1 − α ) ‖ w ‖ 2 2 ) , α ∈ [ 0 , 1 ] {\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left(\alpha \left\|w\right\|_{1}+(1-\alpha )\left\|w\right\|_{2}^{2}\right),\alpha \in [0,1]} Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights. Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries. While the L 1 {\displaystyle L_{1}} norm does not result in an NP-hard problem, the L 1 {\displaystyle L_{1}} norm is convex but is not strictly differentiable due to the kink at x = 0. Subgradient methods which rely on the subderivative can be used to solve L 1 {\displaystyle L_{1}} regularized learning problems. However, faster convergence can be achieved through proximal methods. For a problem min w ∈ H F ( w ) + R ( w ) {\displaystyle \min _{w\in H}F(w)+R(w)} such that F {\displaystyle F} is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), and R {\displaystyle R} is convex, continuous, and proper, then the proximal method to solve the problem is as follows. First define the proximal operator prox R ⁡ ( v ) = argmin w ∈ R D ⁡ { R ( w ) + 1 2 ‖ w − v ‖ 2 } , {\displaystyle \operatorname {prox} _{R}(v)=\mathop {\operatorname {argmin} } _{w\in \mathbb {R} ^{D}}\left\{R(w)+{\frac {1}{2}}\left\|w-v\right\|^{2}\right\},} and then iterate w k + 1 = prox γ , R ⁡ ( w k − γ ∇ F ( w k ) ) {\displaystyle w_{k+1}=\mathop {\operatorname {prox} } _{\gamma ,R}\left(w_{k}-\gamma \nabla F(w_{k})\right)} The proximal method iteratively performs gradient descent and then projects the result back into the space permitted by R {\displaystyle R} . When R {\displaystyle R} is the L1 regularizer, the proximal operator is equivalent to the soft-thresholding operator, S λ ( v ) f ( n ) = { v i − λ , if v i > λ 0 , if v i ∈ [ − λ , λ ] v i + λ , if v i < − λ {\displaystyle S_{\lambda }(v)f(n)={\begin{cases}v_{i}-\lambda ,&{\text{if }}v_{i}>\lambda \\0,&{\text{if }}v_{i}\in [-\lambda ,\lambda ]\\v_{i}+\lambda ,&{\text{if }}v_{i}<-\lambda \end{cases}}} This allows for efficient computation. Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem. In the case of a linear model with non-overlapping known groups, a regularizer can be defined: R ( w ) = ∑ g = 1 G ‖ w g ‖ 2 , {\displaystyle R(w)=\sum _{g=1}^{G}\left\|w_{g}\right\|_{2},} where ‖ w g ‖ 2 = ∑ j = 1 | G g | ( w g j ) 2 {\displaystyle \|w_{g}\|_{2}={\sqrt {\sum _{j=1}^{|G_{g}|}\left(w_{g}^{j}\right)^{2}}}} This can be viewed as inducing a regularizer over the L 2 {\displaystyle L_{2}} norm over members of each group followed by an L 1 {\displaystyle L_{1}} norm over groups. 
This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function: prox λ , R , g ⁡ ( w g ) = { ( 1 − λ ‖ w g ‖ 2 ) w g , if ‖ w g ‖ 2 > λ 0 , if ‖ w g ‖ 2 ≤ λ {\displaystyle \operatorname {prox} \limits _{\lambda ,R,g}(w_{g})={\begin{cases}\left(1-{\dfrac {\lambda }{\left\|w_{g}\right\|_{2}}}\right)w_{g},&{\text{if }}\left\|w_{g}\right\|_{2}>\lambda \\[1ex]0,&{\text{if }}\|w_{g}\|_{2}\leq \lambda \end{cases}}} The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements. If it is desired to preserve the group structure, a new regularizer can be defined: R ( w ) = inf { ∑ g = 1 G ‖ w g ‖ 2 : w = ∑ g = 1 G w ¯ g } {\displaystyle R(w)=\inf \left\{\sum _{g=1}^{G}\|w_{g}\|_{2}:w=\sum _{g=1}^{G}{\bar {w}}_{g}\right\}} For each w g {\displaystyle w_{g}} , w ¯ g {\displaystyle {\bar {w}}_{g}} is defined as the vector such that the restriction of w ¯ g {\displaystyle {\bar {w}}_{g}} to the group g {\displaystyle g} equals w g {\displaystyle w_{g}} and all other entries of w ¯ g {\displaystyle {\bar {w}}_{g}} are zero. The regularizer finds the optimal disintegration of w {\displaystyle w} into parts. It can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method with a complication. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration. Regularizers for semi-supervised learning When labels are more expensive to gather than input examples, semi-supervised learning can be useful. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrix W {\displaystyle W} is given, a regularizer can be defined: R ( f ) = ∑ i , j w i j ( f ( x i ) − f ( x j ) ) 2 {\displaystyle R(f)=\sum _{i,j}w_{ij}\left(f(x_{i})-f(x_{j})\right)^{2}} If W i j {\displaystyle W_{ij}} encodes the result of some distance metric for points x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} , it is desirable that f ( x i ) ≈ f ( x j ) {\displaystyle f(x_{i})\approx f(x_{j})} . This regularizer captures this intuition, and is equivalent to: R ( f ) = f ¯ T L f ¯ {\displaystyle R(f)={\bar {f}}^{\mathsf {T}}L{\bar {f}}} where L = D − W {\displaystyle L=D-W} is the Laplacian matrix of the graph induced by W {\displaystyle W} . The optimization problem min f ∈ R m R ( f ) , m = u + l {\displaystyle \min _{f\in \mathbb {R} ^{m}}R(f),m=u+l} can be solved analytically if the constraint f ( x i ) = y i {\displaystyle f(x_{i})=y_{i}} is applied for all supervised samples. The labeled part of the vector f {\displaystyle f} is therefore obvious. 
The unlabeled part of f {\displaystyle f} is solved for by: min f u ∈ R u f T L f = min f u ∈ R u { f u T L u u f u + f l T L l u f u + f u T L u l f l } {\displaystyle \min _{f_{u}\in \mathbb {R} ^{u}}f^{\mathsf {T}}Lf=\min _{f_{u}\in \mathbb {R} ^{u}}\left\{f_{u}^{\mathsf {T}}L_{uu}f_{u}+f_{l}^{\mathsf {T}}L_{lu}f_{u}+f_{u}^{\mathsf {T}}L_{ul}f_{l}\right\}} ∇ f u = 2 L u u f u + 2 L u l Y {\displaystyle \nabla _{f_{u}}=2L_{uu}f_{u}+2L_{ul}Y} f u = L u u † ( L u l Y ) {\displaystyle f_{u}=L_{uu}^{\dagger }\left(L_{ul}Y\right)} The pseudo-inverse can be taken because L u l {\displaystyle L_{ul}} has the same range as L u u {\displaystyle L_{uu}} . Regularizers for multitask learning In the case of multitask learning, T {\displaystyle T} problems are considered simultaneously, each related in some way. The goal is to learn T {\displaystyle T} functions, ideally borrowing strength from the relatedness of tasks, that have predictive power. This is equivalent to learning the matrix W : T × D {\displaystyle W:T\times D} . R ( w ) = ∑ i = 1 D ‖ W ‖ 2 , 1 {\displaystyle R(w)=\sum _{i=1}^{D}\left\|W\right\|_{2,1}} This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods. R ( w ) = ‖ σ ( W ) ‖ 1 {\displaystyle R(w)=\left\|\sigma (W)\right\|_{1}} where σ ( W ) {\displaystyle \sigma (W)} is the eigenvalues in the singular value decomposition of W {\displaystyle W} . R ( f 1 ⋯ f T ) = ∑ t = 1 T ‖ f t − 1 T ∑ s = 1 T f s ‖ H k 2 {\displaystyle R(f_{1}\cdots f_{T})=\sum _{t=1}^{T}\left\|f_{t}-{\frac {1}{T}}\sum _{s=1}^{T}f_{s}\right\|_{H_{k}}^{2}} This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents an individual. R ( f 1 ⋯ f T ) = ∑ r = 1 C ∑ t ∈ I ( r ) ‖ f t − 1 I ( r ) ∑ s ∈ I ( r ) f s ‖ H k 2 {\displaystyle R(f_{1}\cdots f_{T})=\sum _{r=1}^{C}\sum _{t\in I(r)}\left\|f_{t}-{\frac {1}{I(r)}}\sum _{s\in I(r)}f_{s}\right\|_{H_{k}}^{2}} where I ( r ) {\displaystyle I(r)} is a cluster of tasks. This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predict Netflix recommendations. A cluster would correspond to a group of people who share similar preferences. More generally than above, similarity between tasks can be defined by a function. The regularizer encourages the model to learn similar functions for similar tasks. R ( f 1 ⋯ f T ) = ∑ t , s = 1 , t ≠ s T ‖ f t − f s ‖ 2 M t s {\displaystyle R(f_{1}\cdots f_{T})=\sum _{t,s=1,t\neq s}^{\mathsf {T}}\left\|f_{t}-f_{s}\right\|^{2}M_{ts}} for a given symmetric similarity matrix M {\displaystyle M} . Other uses of regularization in statistics and machine learning Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting not involving regularization include cross-validation. 
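As a concrete illustration of the sparsity-inducing penalties and the proximal method described above, the following sketch runs the basic proximal-gradient (ISTA) iteration for the L1-regularized least-squares (lasso) problem: a gradient step on the smooth data term followed by element-wise soft-thresholding. The synthetic data, step size, and λ are chosen only for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, step, iters=2000):
    """Proximal gradient descent for min_w (1/n)||Xw - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = (2.0 / n) * X.T @ (X @ w - y)          # gradient of the smooth term
        w = soft_threshold(w - step * grad, step * lam)  # proximal step
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]                  # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=100)

w_hat = ista(X, y, lam=0.1, step=0.01)
print(np.round(w_hat, 3))   # coefficients of the irrelevant features shrink to (near) zero
```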
Examples of applications of different methods of regularization to the linear model are ridge regression (an L2 penalty), lasso (an L1 penalty), and elastic net (a combined L1 and L2 penalty), all discussed above.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_note-Word_n-gram_language_model_compositionality-17] | [TOKENS: 1793]
Contents Language model A language model is a computational model that predicts sequences in natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval. Large language models (LLMs), currently their most advanced form as of 2019, are predominantly based on transformers trained on larger datasets (frequently using texts scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model. History Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars. In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes a word’s meaning such that words closer in vector space are similar in meaning and common relationships between words, such as plurality or gender, are preserved. Pure statistical models In 1980, the first significant statistical language model was proposed, and during the decade IBM performed 'Shannon-style' experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. A word n-gram language model is a statistical model of language which calculates the probability of the next word in a sequence from a fixed size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens are introduced to denote the start and end of a sentence ⟨ s ⟩ {\displaystyle \langle s\rangle } and ⟨ / s ⟩ {\displaystyle \langle /s\rangle } . To prevent a zero probability being assigned to unseen words, the probability of each seen word is slightly lowered to make room for the unseen words in a given corpus. To achieve this, various smoothing methods are used, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques, such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network–based models, which in turn have been superseded by Transformer-based models often referred to as large language models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is P ( w m ∣ w 1 , … , w m − 1 ) = 1 Z ( w 1 , … , w m − 1 ) exp ⁡ ( a T f ( w 1 , … , w m ) ) {\displaystyle P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))} where Z ( w 1 , … , w m − 1 ) {\displaystyle Z(w_{1},\ldots ,w_{m-1})} is the partition function, a {\displaystyle a} is the parameter vector, and f ( w 1 , … , w m ) {\displaystyle f(w_{1},\ldots ,w_{m})} is the feature function. 
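A small numerical sketch of the maximum-entropy formulation above, assuming a toy vocabulary, hand-picked indicator features of the preceding word, and fixed weights a (in practice a would be learned from data):

```python
import numpy as np

vocab = ["the", "red", "house", "saw"]

# Hypothetical indicator features: one per (previous word, word) pair.
feature_pairs = [("the", "red"), ("the", "house"), ("red", "house"), ("saw", "the")]

def features(history, word):
    """f(history, word): indicator of each (previous word, word) pair."""
    prev = history[-1] if history else "<s>"
    return np.array([1.0 if (prev, word) == pair else 0.0 for pair in feature_pairs])

a = np.array([1.2, 0.7, 2.0, 1.5])   # illustrative (not learned) weights

def maxent_prob(history, word):
    """P(word | history) = exp(a . f(history, word)) / Z(history)."""
    scores = {w: np.exp(a @ features(history, w)) for w in vocab}
    z = sum(scores.values())          # partition function Z(history)
    return scores[word] / z

print(maxent_prob(["I", "saw", "the"], "red"))
print(maxent_prob(["the", "red"], "house"))
```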
In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a {\displaystyle a} or some form of regularization. The log-bilinear model is another example of an exponential language model. Skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. word n-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other. For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences In skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-d vector representation, then v ( k i n g ) − v ( m a l e ) + v ( f e m a l e ) ≈ v ( q u e e n ) {\displaystyle v(\mathrm {king} )-v(\mathrm {male} )+v(\mathrm {female} )\approx v(\mathrm {queen} )} where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. Neural models Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. 
Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although neural language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do. Evaluation and benchmarks Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Curse_of_dimensionality] | [TOKENS: 3571]
Contents Curse of dimensionality The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. The curse generally refers to issues that arise when the number of datapoints is small (in a suitably defined sense) relative to the intrinsic dimension of the data. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient. Domains In some problems, each variable can take one of several discrete values, or the range of possible values is divided to give a finite number of possibilities. Taking the variables together, a huge number of combinations of values must be considered. This effect is also known as the combinatorial explosion. Even in the simplest case of d {\displaystyle d} binary variables, the number of possible combinations already is 2 d {\displaystyle 2^{d}} , exponential in the dimensionality. Naively, each additional dimension doubles the effort needed to try all combinations. There is an exponential increase in volume associated with adding extra dimensions to a mathematical space. For example, 10^2 = 100 evenly spaced sample points suffice to sample a unit interval (try to visualize a "1-dimensional" cube, i.e. a line) with no more than 10^-2 = 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice that has a spacing of 10^-2 = 0.01 between adjacent points would require 10^20 = [(10^2)^10] sample points. In general, with a spacing distance of 10^-n the 10-dimensional hypercube appears to be a factor of 10^(n(10-1)) = [(10^n)^10/(10^n)] "larger" than the 1-dimensional hypercube, which is the unit interval. In the above example n = 2: when using a sampling distance of 0.01 the 10-dimensional hypercube appears to be 10^18 "larger" than the unit interval. This effect is a combination of the combinatorics problems above and the distance function problems explained below. When solving dynamic optimization problems by numerical backward induction, the objective function must be computed for each combination of values. This is a significant obstacle when the dimension of the "state variable" is large. In machine learning problems that involve learning a "state-of-nature" from a finite number of data samples in a high-dimensional feature space with each feature having a range of possible values, typically an enormous amount of training data is required to ensure that there are several samples with each combination of values. In an abstract sense, as the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially.
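The grid-sampling argument above is easy to check numerically: covering the unit hypercube at a fixed spacing requires (points per axis)^d lattice points, so the count explodes with the dimension. A short sketch, using the spacing of 0.01 from the example:

```python
def grid_points(dim, spacing=0.01):
    """Number of evenly spaced lattice points covering the unit hypercube."""
    per_axis = int(round(1.0 / spacing))   # 100 points per axis for spacing 0.01
    return per_axis ** dim

for d in (1, 2, 3, 10):
    print(f"d = {d:2d}: {grid_points(d):.3e} sample points")
# d = 10 already requires 1e20 points, as noted above.
```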
A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation. In machine learning, and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon, also known as the Hughes phenomenon. This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased, but beyond a certain dimensionality it starts deteriorating instead of improving steadily. Nevertheless, in the context of a simple classifier (e.g., linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix), Zollanvari et al. showed both analytically and empirically that as long as the relative cumulative efficacy of an additional feature set (with respect to features that are already part of the classifier) is greater (or less) than the size of this additional feature set, the expected error of the classifier constructed using these additional features will be less (or greater) than the expected error of the classifier constructed without them. In other words, both the size of additional features and their (relative) cumulative discriminatory effect are important in observing a decrease or increase in the average predictive power. In metric learning, higher dimensions can sometimes allow a model to achieve better performance. After normalizing embeddings to the surface of a hypersphere, FaceNet achieves the best performance using 128 dimensions as opposed to 64, 256, or 512 dimensions in one ablation study. A loss function for unitary-invariant dissimilarity between word embeddings was found to be minimized in high dimensions. In data mining, the curse of dimensionality refers to a data set with too many features. Consider, for example, a data set of 200 individuals and 2000 genes (features), with a 1 or 0 denoting whether or not an individual has a genetic mutation in that gene. A data mining application to this data set might be to find correlations between specific genetic mutations and to create a classification algorithm, such as a decision tree, to determine whether an individual has cancer. A common practice of data mining in this domain would be to create association rules between genetic mutations that lead to the development of cancers. To do this, one would loop through each genetic mutation of each individual, find other genetic mutations that co-occur above a desired threshold, and create groups; one would start with groups of two, then three, then four, until the resulting set of groups is empty. In the worst case this algorithm amounts to calculating all permutations of gene groups for each individual or row. Given that the number of permutations of $n$ items taken $r$ at a time is $\frac{n!}{(n-r)!}$, the number of ordered triples of genes out of 2000 is $\frac{2000!}{1997!}=2000\times 1999\times 1998=7{,}988{,}004{,}000$ for each individual. The number of permutations grows factorially as the group size increases.
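The permutation count quoted above is easy to verify directly; the helper below is a minimal sketch using only the standard library, with the 2000-gene framing taken from the example in the text.

```python
from math import factorial

def permutations_count(n: int, r: int) -> int:
    """Number of ordered selections of r items out of n: n! / (n - r)!."""
    return factorial(n) // factorial(n - r)

# 2000 genes taken in ordered groups of three, as in the example above:
print(permutations_count(2000, 3))  # 7988004000 = 2000 * 1999 * 1998
```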
As this factorial growth shows, one of the major problems data miners face regarding the curse of dimensionality is that the space of possible parameter values grows exponentially or factorially as the number of features in the data set grows. This problem critically affects both computational time and space when searching for associations or optimal features to consider. Another problem data miners may face when dealing with too many features is that the number of false predictions or classifications tends to increase as the number of features grows in the data set. In terms of the classification problem discussed above, keeping every data point could lead to a higher number of false positives and false negatives in the model. This may seem counterintuitive, but consider again the genetic-mutation data set described above, which records every genetic mutation for each individual. Each genetic mutation, whether or not it correlates with cancer, will have some input or weight in the model that guides the decision-making process of the algorithm. There may be mutations that are outliers or ones that dominate the overall distribution of genetic mutations when in fact they do not correlate with cancer. These features may be working against one's model, making it more difficult to obtain optimal results. This problem is up to the data miner to solve, and there is no universal solution. The first step any data miner should take is to explore the data, in an attempt to gain an understanding of how it can be used to solve the problem. One must first understand what the data means, and what one is trying to discover, before deciding whether anything must be removed from the data set. One can then create or use a feature selection or dimensionality reduction algorithm to remove samples or features from the data set as necessary. One example of such methods is the interquartile range method, used to remove outliers in a data set by computing the interquartile range of a feature and flagging values that fall far outside it. When a measure such as a Euclidean distance is defined using many coordinates, there is little difference in the distances between different pairs of points. One way to illustrate the "vastness" of high-dimensional Euclidean space is to compare the proportion of an inscribed hypersphere with radius $r$ and dimension $d$ to that of a hypercube with edges of length $2r$. The volume of such a sphere is $\frac{2r^{d}\pi ^{d/2}}{d\,\Gamma (d/2)}$, where $\Gamma$ is the gamma function, while the volume of the cube is $(2r)^{d}$. As the dimension $d$ of the space increases, the hypersphere becomes an insignificant volume relative to that of the hypercube. This can be seen by comparing the proportions as the dimension $d$ goes to infinity: $\lim _{d\to \infty }\frac{V_{\text{sphere}}}{V_{\text{cube}}}=\lim _{d\to \infty }\frac{\pi ^{d/2}}{d\,2^{d-1}\,\Gamma (d/2)}=0$. Furthermore, the distance between the center and the corners is $r\sqrt{d}$, which increases without bound for fixed $r$. In this sense, when points are uniformly generated in a high-dimensional hypercube, almost all points are much farther than $r$ units away from the center. In high dimensions, the volume of the d-dimensional unit hypercube (with coordinates of the vertices $\pm 1$) is concentrated near a sphere of radius $\sqrt{d}/\sqrt{3}$ for large dimension $d$.
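The vanishing sphere-to-cube volume ratio can be computed directly from the formulas above; this small sketch (ours, using only the standard-library gamma function) shows how quickly the inscribed ball becomes negligible.

```python
from math import pi, gamma

def sphere_to_cube_ratio(d: int) -> float:
    """Volume of the inscribed d-ball divided by the volume of the enclosing cube.

    V_ball = 2 r^d pi^(d/2) / (d * Gamma(d/2)) and V_cube = (2r)^d, so r cancels.
    """
    return 2 * pi ** (d / 2) / (d * gamma(d / 2) * 2 ** d)

for d in (2, 5, 10, 20):
    print(d, sphere_to_cube_ratio(d))
# d=2 -> 0.785 (pi/4), d=10 -> ~2.5e-3, d=20 -> ~2.5e-8: the ball's share of the
# cube collapses toward zero as the dimension grows.
```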
To see this concentration, note that for each coordinate $x_{i}$ the average value of $x_{i}^{2}$ in the cube is $\langle x_{i}^{2}\rangle ={\tfrac {1}{2}}\int _{-1}^{1}x^{2}\,dx={\tfrac {1}{3}}$, and the variance of $x_{i}^{2}$ for the uniform distribution in the cube is ${\tfrac {1}{5}}-{\tfrac {1}{9}}={\tfrac {4}{45}}$. Therefore, the squared distance from the origin, $r^{2}=\sum _{i}x_{i}^{2}$, has average value $d/3$ and variance $4d/45$. For large $d$, the distribution of $r^{2}/d$ is close to the normal distribution with mean $1/3$ and standard deviation $2/\sqrt{45d}$, according to the central limit theorem. Thus, when uniformly generating points in high dimensions, both the "middle" of the hypercube and the corners are empty, and all the volume is concentrated near the surface of a sphere of "intermediate" radius $\sqrt{d/3}$. This also helps to understand the chi-squared distribution. Indeed, the (non-central) chi-squared distribution associated with a random point in the interval [-1, 1] is the same as the distribution of the length-squared of a random point in the d-cube. By the law of large numbers, this distribution concentrates itself in a narrow band around $d$ times the variance ($\sigma ^{2}$) of the original distribution. This illuminates the chi-squared distribution and also illustrates that most of the volume of the d-cube concentrates near the boundary of a sphere of radius $\sigma \sqrt{d}$. A further development of this phenomenon is as follows. Any fixed distribution on the real numbers induces a product distribution on points in $\mathbb{R}^{d}$. For any fixed $n$, it turns out that the difference between the minimum and the maximum distance between a random reference point $Q$ and a list of $n$ random data points $P_{1},\ldots ,P_{n}$ becomes indiscernible compared to the minimum distance: $\lim _{d\to \infty }E\!\left[\frac{\operatorname{dist}_{\max }(d)-\operatorname{dist}_{\min }(d)}{\operatorname{dist}_{\min }(d)}\right]=0$. This is often cited as distance functions losing their usefulness (for the nearest-neighbor criterion in feature-comparison algorithms, for example) in high dimensions. However, recent research has shown this to hold only in the artificial scenario where the one-dimensional distributions are independent and identically distributed. When attributes are correlated, data can become easier to work with and provide higher distance contrast; the signal-to-noise ratio was found to play an important role, so feature selection should be used. More recently, it has been suggested that there may be a conceptual flaw in the argument that contrast-loss creates a curse in high dimensions. Machine learning can be understood as the problem of assigning instances to their respective generative process of origin, with class labels acting as symbolic representations of individual generative processes. The curse's derivation assumes all instances are independent, identical outcomes of a single high dimensional generative process. If there is only one generative process, there would exist only one (naturally occurring) class and machine learning would be conceptually ill-defined in both high and low dimensions. Thus, the traditional argument that contrast-loss creates a curse may be fundamentally inappropriate. In addition, it has been shown that when the generative model is modified to accommodate multiple generative processes, contrast-loss can morph from a curse to a blessing, as it ensures that the nearest-neighbor of an instance is almost-surely its most closely related instance.
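Before continuing, here is a quick Monte Carlo check of the concentration result above; the sketch (our illustration, using NumPy) samples points uniformly in the cube [-1, 1]^d and confirms that r^2/d clusters around 1/3 with a spread shrinking like 2/sqrt(45 d).

```python
import numpy as np

rng = np.random.default_rng(0)

for d in (3, 30, 300, 3000):
    x = rng.uniform(-1.0, 1.0, size=(2_000, d))   # 2,000 random points in [-1, 1]^d
    r2_over_d = (x ** 2).sum(axis=1) / d
    print(d, round(r2_over_d.mean(), 4), round(r2_over_d.std(), 4))
# The mean stays near 1/3 while the standard deviation shrinks like 2 / sqrt(45 d):
# almost all of the volume sits near the sphere of radius sqrt(d / 3).
```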
From the multiple-generative-process perspective just described, contrast-loss makes high-dimensional distances especially meaningful, not especially non-meaningful as is often argued. The effect complicates nearest neighbor search in high dimensional space. It is not possible to quickly reject candidates by using the difference in one coordinate as a lower bound for a distance based on all the dimensions. However, it has recently been observed that the mere number of dimensions does not necessarily result in difficulties, since relevant additional dimensions can also increase the contrast. In addition, for the resulting ranking it remains useful to discern close and far neighbors. Irrelevant ("noise") dimensions, however, reduce the contrast in the manner described above. In time series analysis, where the data are inherently high-dimensional, distance functions also work reliably as long as the signal-to-noise ratio is high enough. Another effect of high dimensionality on distance functions concerns k-nearest neighbor (k-NN) graphs constructed from a data set using a distance function. As the dimension increases, the indegree distribution of the k-NN digraph becomes skewed with a peak on the right because of the emergence of a disproportionate number of hubs, that is, data points that appear in many more k-NN lists of other data points than the average. This phenomenon can have a considerable impact on various techniques for classification (including the k-NN classifier), semi-supervised learning, and clustering, and it also affects information retrieval. In a 2012 survey, Zimek et al. identified a number of characteristic problems that arise when searching for anomalies in high-dimensional data; many of the analyzed specialized methods tackle one or another of these problems, but many open research questions remain. Despite the expected "curse of dimensionality" difficulties, common-sense heuristics based on the most straightforward methods "can yield results which are almost surely optimal" for high-dimensional problems. The term "blessing of dimensionality" was introduced in the late 1990s. Donoho, in his "Millennium manifesto", explained why he thinks the "blessing of dimensionality" will form a basis of future data mining. The effects of the blessing of dimensionality were discovered in many applications and found their foundation in the concentration of measure phenomena. One example of the blessing of dimensionality phenomenon is the linear separability of a random point from a large finite random set with high probability, even if this set is exponentially large: the number of elements in this random set can grow exponentially with dimension. Moreover, this linear functional can be selected in the form of the simplest linear Fisher discriminant. This separability theorem was proven for a wide class of probability distributions: general uniformly log-concave distributions, product distributions in a cube, and many other families (reviewed in recent surveys). "The blessing of dimensionality and the curse of dimensionality are two sides of the same coin." For example, the typical property of essentially high-dimensional probability distributions in a high-dimensional space is: the squared distance of random points to a selected point is, with high probability, close to the average (or median) squared distance. This property significantly simplifies the expected geometry of data and indexing of high-dimensional data (blessing), but, at the same time, it makes similarity search in high dimensions difficult and even useless (curse). Zimek et al.
noted that while the typical formalizations of the curse of dimensionality concern i.i.d. data, data that is well separated in each attribute remains tractable even in high dimensions; they argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error). In particular, for unsupervised data analysis this effect is known as swamping.
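As an empirical illustration of the hub phenomenon discussed above (data points that appear in many more k-NN lists than the average), the following brute-force sketch (ours, NumPy only) builds the k-NN digraph for random points and reports the largest in-degree, which tends to grow well beyond the mean of k as the dimension increases.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_indegrees(points: np.ndarray, k: int = 10) -> np.ndarray:
    """In-degree of each point in the k-nearest-neighbour digraph (brute force)."""
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                              # exclude self-neighbours
    knn = np.argsort(d2, axis=1)[:, :k]                       # k nearest neighbours per point
    return np.bincount(knn.ravel(), minlength=len(points))

for dim in (3, 100):
    degrees = knn_indegrees(rng.standard_normal((1000, dim)))
    print(dim, "mean in-degree:", degrees.mean(), "max in-degree:", degrees.max())
# In high dimensions a few "hub" points tend to collect far more in-links than
# the average of k, skewing the in-degree distribution to the right.
```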
========================================
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_note-Word_n-gram_language_model_mikolov-16] | [TOKENS: 1793]
Contents Language model A language model is a computational model that predicts sequences in natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval. Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using texts scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model. History Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars. In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes a word's meaning such that words closer in vector space are similar in meaning, and common relationships between words, such as plurality or gender, are preserved. Pure statistical models In 1980, the first significant statistical language model was proposed, and during the decade IBM performed 'Shannon-style' experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. A word n-gram language model is a statistical model of language which calculates the probability of the next word in a sequence from a fixed-size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens $\langle s\rangle$ and $\langle /s\rangle$ are introduced to denote the start and end of a sentence. To prevent a zero probability being assigned to unseen words, the probability of each seen word is slightly lowered to make room for the unseen words in a given corpus. To achieve this, various smoothing methods are used, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques, such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network–based models, which in turn have been superseded by Transformer-based models often referred to as large language models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is $P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))$, where $Z(w_{1},\ldots ,w_{m-1})$ is the partition function, $a$ is the parameter vector, and $f(w_{1},\ldots ,w_{m})$ is the feature function.
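Before turning to the feature function in more detail, here is a minimal sketch of the word n-gram model with add-one smoothing described above; the toy corpus and token choices are our own, not from the source.

```python
from collections import Counter

# Toy corpus with sentence-boundary tokens <s> and </s>.
corpus = [["<s>", "the", "cat", "sat", "</s>"],
          ["<s>", "the", "dog", "sat", "</s>"]]

unigrams = Counter(tok for sent in corpus for tok in sent)
bigrams = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))
vocab_size = len(unigrams)

def p_bigram(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing over the vocabulary."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

print(p_bigram("the", "cat"))   # a seen bigram
print(p_bigram("cat", "dog"))   # an unseen bigram still receives non-zero probability
```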
In the simplest case of the maximum entropy model above, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on $a$ or some form of regularization. The log-bilinear model is another example of an exponential language model. The skip-gram language model is an attempt to overcome the data sparsity problem that the preceding model (i.e. the word n-gram language model) faced. Words represented in an embedding vector are no longer necessarily consecutive, but can leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence whose components occur at distance at most k from each other. For example, for a given input text, the set of 1-skip-2-grams includes all of its bigrams (2-grams) and, in addition, the subsequences obtained by skipping over one intermediate word. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if $v$ is the function that maps a word $w$ to its n-dimensional vector representation, then $v(\mathrm{king})-v(\mathrm{male})+v(\mathrm{female})\approx v(\mathrm{queen})$, where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. Neural models Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning.
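Returning to the k-skip-n-grams defined earlier in this passage, a small generator can make the definition concrete; the sketch below is ours and follows one common reading of the definition (at most k words skipped between consecutive components).

```python
from itertools import combinations

def skip_grams(tokens: list[str], n: int, k: int) -> list[tuple[str, ...]]:
    """Length-n subsequences whose consecutive components skip at most k words."""
    grams = []
    for idx in combinations(range(len(tokens)), n):
        if all(j - i <= k + 1 for i, j in zip(idx, idx[1:])):
            grams.append(tuple(tokens[i] for i in idx))
    return grams

print(skip_grams("the rain in Spain".split(), n=2, k=1))
# Contains the ordinary bigrams ('the', 'rain'), ('rain', 'in'), ('in', 'Spain')
# plus the 1-skip pairs ('the', 'in') and ('rain', 'Spain').
```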
Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do. Evaluation and benchmarks Evaluation of the quality of language models is mostly done by comparison to human-created benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Word_embedding] | [TOKENS: 1911]
Contents Word embedding In natural language processing, a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning. Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers. Methods to generate this mapping include neural networks, dimensionality reduction on the word co-occurrence matrix, probabilistic models, the explainable knowledge base method, and explicit representation in terms of the context in which words appear. Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing and sentiment analysis. Development and history of the approach In distributional semantics, a quantitative methodological approach for understanding meaning in observed language, word embeddings or semantic feature space models have been used as a knowledge representation for some time. Such models aim to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that "a word is characterized by the company it keeps" was proposed in a 1957 article by John Rupert Firth, but also has roots in the contemporaneous work on search systems and in cognitive psychology. The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is the vector space model for information retrieval. Such vector space models for words and their distributional data, implemented in their simplest form, result in a very sparse vector space of high dimensionality (cf. curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the random indexing approach for collecting word co-occurrence contexts. In 2000, Bengio et al., in a series of papers titled "Neural probabilistic language models", showed how to reduce the high dimensionality of word representations in context by "learning a distributed representation for words". A study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example of self-supervised learning of word embeddings. Word embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004. Roweis and Saul published in Science how to use "locally linear embedding" (LLE) to discover representations of high dimensional data structures.
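The count-then-reduce pipeline sketched above (a sparse co-occurrence space compressed with linear algebra, as in latent semantic analysis) fits in a few lines; the toy corpus, window size, and embedding dimension below are our own illustrative choices.

```python
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
window = 2  # symmetric context window

tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Word-word co-occurrence counts within the window.
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[index[w], index[sent[j]]] += 1

# Truncated SVD: keep the top three singular directions as dense word embeddings.
u, s, _ = np.linalg.svd(counts)
embeddings = u[:, :3] * s[:3]
print({w: embeddings[index[w]].round(2) for w in ("cat", "dog", "mat")})
```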
Most new word embedding techniques after about 2005 rely on a neural network architecture instead of more probabilistic and algebraic models, after foundational work done by Yoshua Bengio and colleagues. The approach has been adopted by many research groups after theoretical advances in 2010 had been made on the quality of vectors and the training speed of the model, as well as after hardware advances allowed for a broader parameter space to be explored profitably. In 2013, a team at Google led by Tomas Mikolov created word2vec, a word embedding toolkit that can train vector space models faster than previous approaches. The word2vec approach has been widely used in experimentation and was instrumental in raising interest for word embeddings as a technology, moving the research strand out of specialised research into broader experimentation and eventually paving the way for practical application. Polysemy and homonymy Historically, one of the main limitations of static word embeddings or word vector space models is that words with multiple meanings are conflated into a single representation (a single vector in the semantic space). In other words, polysemy and homonymy are not handled properly. For example, in the sentence "The club I tried yesterday was great!", it is not clear if the term club is related to the word sense of a club sandwich, clubhouse, golf club, or any other sense that club might have. The necessity to accommodate multiple meanings per word in different vectors (multi-sense embeddings) is the motivation for several contributions in NLP to split single-sense embeddings into multi-sense ones. Most approaches that produce multi-sense embeddings can be divided into two main categories for their word sense representation, i.e., unsupervised and knowledge-based. Based on word2vec skip-gram, Multi-Sense Skip-Gram (MSSG) performs word-sense discrimination and embedding simultaneously, improving its training time, while assuming a specific number of senses for each word. In the Non-Parametric Multi-Sense Skip-Gram (NP-MSSG) this number can vary depending on each word. Combining the prior knowledge of lexical databases (e.g., WordNet, ConceptNet, BabelNet), word embeddings and word sense disambiguation, Most Suitable Sense Annotation (MSSA) labels word-senses through an unsupervised and knowledge-based approach, considering a word's context in a pre-defined sliding window. Once the words are disambiguated, they can be used in a standard word embeddings technique, so multi-sense embeddings are produced. MSSA architecture allows the disambiguation and annotation process to be performed recurrently in a self-improving manner. The use of multi-sense embeddings is known to improve performance in several NLP tasks, such as part-of-speech tagging, semantic relation identification, semantic relatedness, named entity recognition and sentiment analysis. As of the late 2010s, contextually-meaningful embeddings such as ELMo and BERT have been developed. Unlike static word embeddings, these embeddings are at the token-level, in that each occurrence of a word has its own embedding. These embeddings better reflect the multi-sense nature of words, because occurrences of a word in similar contexts are situated in similar regions of BERT's embedding space. For biological sequences: BioVectors Word embeddings for n-grams in biological sequences (e.g. DNA, RNA, and Proteins) for bioinformatics applications have been proposed by Asgari and Mofrad.
Named bio-vectors (BioVec) for biological sequences in general, protein-vectors (ProtVec) for proteins (amino-acid sequences), and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning in proteomics and genomics. The results presented by Asgari and Mofrad suggest that BioVectors can characterize biological sequences in terms of biochemical and biophysical interpretations of the underlying patterns. Game design Word embeddings with applications in game design have been proposed by Rabii and Cook as a way to discover emergent gameplay using logs of gameplay data. The process requires transcribing actions that occur during a game within a formal language and then using the resulting text to create word embeddings. The results presented by Rabii and Cook suggest that the resulting vectors can capture expert knowledge about games like chess that is not explicitly stated in the game's rules. Sentence embeddings The idea has been extended to embeddings of entire sentences or even documents, e.g. in the form of the thought vectors concept. In 2015, some researchers suggested "skip-thought vectors" as a means to improve the quality of machine translation. A more recent and popular approach for representing sentences is Sentence-BERT, or SentenceTransformers, which modifies pre-trained BERT with the use of siamese and triplet network structures. Software Software for training and using word embeddings includes Tomáš Mikolov's Word2vec, Stanford University's GloVe, GN-GloVe, Flair embeddings, AllenNLP's ELMo, BERT, fastText, Gensim, Indra, and Deeplearning4j. Principal Component Analysis (PCA) and T-Distributed Stochastic Neighbour Embedding (t-SNE) are both used to reduce the dimensionality of word vector spaces and visualize word embeddings and clusters. For instance, fastText is also used to calculate word embeddings for text corpora in Sketch Engine that are available online. Ethical implications Word embeddings may contain the biases and stereotypes present in the training data set: in the 2016 paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", Bolukbasi et al. point out that a publicly available (and popular) word2vec embedding trained on Google News texts (a commonly used data corpus), which consists of text written by professional journalists, still shows disproportionate word associations reflecting gender and racial biases when extracting word analogies. For example, one of the analogies generated using the aforementioned word embedding is "man is to computer programmer as woman is to homemaker". Research done by Jieyu Zhou et al. shows that the applications of these trained word embeddings without careful oversight likely perpetuate existing bias in society, which is introduced through unaltered training data. Furthermore, word embeddings can even amplify these biases.
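As a usage sketch of the word2vec tooling named in the Software paragraph above: the snippet assumes the Gensim library (version 4.x, where the dimensionality argument is called vector_size) and a toy corpus of our own; it illustrates the call pattern, not any experiment from the source.

```python
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"]]

# sg=1 selects the skip-gram training objective; sg=0 would use CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

vector = model.wv["cat"]                      # the learned embedding for "cat"
print(model.wv.most_similar("cat", topn=3))   # nearest neighbours in the vector space
```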
========================================
[SOURCE: https://en.wikipedia.org/wiki/Claude_Shannon] | [TOKENS: 5613]
Contents Claude Shannon Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American polymath who was a mathematician, electrical engineer, computer scientist, cryptographer and inventor known as the "father of information theory" and the man who laid the foundations of the Information Age. Shannon was the first to describe the use of Boolean algebra—essential to all digital electronic circuits—and helped found the field of artificial intelligence. The roboticist Rodney Brooks declared Shannon the 20th century engineer who contributed the most to 21st century technologies, and the mathematician Solomon W. Golomb described his intellectual achievement as "one of the greatest of the twentieth century". At the University of Michigan, Shannon earned two degrees, graduating with a Bachelor of Science in electrical engineering and another in mathematics, both in 1936. As a 21-year-old master's degree student in electrical engineering at MIT, his 1937 thesis, "A Symbolic Analysis of Relay and Switching Circuits", demonstrated that electrical applications of Boolean algebra could construct any logical numerical relationship, thereby establishing the theory behind digital computing and digital circuits. Called by some the most important master's thesis of all time, it is the "birth certificate of the digital revolution", and started him in a lifetime of work that led him to win a Kyoto Prize in 1985. He graduated from MIT in 1940 with a PhD in mathematics; his thesis, which focused on genetics, contained important results but initially went unpublished. Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications, writing a paper which is considered one of the foundational pieces of modern cryptography, with his work described as "a turning point, and marked the closure of classical cryptography and the beginning of modern cryptography". His work was foundational for symmetric-key cryptography, including the work of Horst Feistel, the Data Encryption Standard (DES), and the Advanced Encryption Standard (AES). As a result, Shannon has been called the "founding father of modern cryptography". His 1948 paper "A Mathematical Theory of Communication" laid the foundations for the field of information theory, referred to as a "blueprint for the digital era" by electrical engineer Robert G. Gallager and "the Magna Carta of the Information Age" by Scientific American. Golomb compared Shannon's influence on the digital age to that which "the inventor of the alphabet has had on literature". Shannon is also regarded as the most important post-1948 contributor to the theory. Advancements across multiple scientific disciplines utilized Shannon's theory—including the invention of the compact disc, the development of the Internet, the commercialization of mobile telephony, and the understanding of black holes. He formally introduced the term "bit", and was a co-inventor of both pulse-code modulation and the first wearable computer. He also invented the signal-flow graph. Shannon joined the Central Intelligence Agency's Special Cryptologic Advisory Group in 1951. From 1956 to 1978, he was a professor at MIT. He also made numerous contributions to the field of artificial intelligence, including co-organizing the 1956 Dartmouth workshop, considered to be the discipline's founding event, and papers on the programming of chess computers.
His Theseus machine was the first electrical device to learn by trial and error, being one of the first examples of artificial intelligence. Biography The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey. His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1880–1945), was a language teacher, who also served as the principal of Gaylord High School. Claude Sr. was a descendant of New Jersey settlers, while Mabel was a child of German immigrants. Shannon's family was active in their Methodist Church during his youth. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical subjects. His best subjects were science and mathematics. At home, he constructed such devices as models of planes, a radio-controlled model boat and a barbed-wire telegraph system to a friend's house a half-mile away. While growing up, he also worked as a messenger for the Western Union company. Shannon's childhood hero was Thomas Edison, whom he later learned was a distant cousin. Both Shannon and Edison were descendants of John Ogden (1609–1682), a colonial leader and an ancestor of many distinguished people. In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two bachelor's degrees: one in electrical engineering and the other in mathematics. In 1936, Shannon began his graduate studies in electrical engineering at the Massachusetts Institute of Technology (MIT), where he worked on Vannevar Bush's differential analyzer, which was an early analog computer that was composed of electromechanical parts and could solve differential equations. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits, with a paper from this thesis published in 1938. A revolutionary work for switching circuit theory, in it Shannon diagramed switching circuits that could implement the essential operators of Boolean algebra. Then he proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were used during that time in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits, including a digital 4-bit full adder. His work differed significantly from the work of previous engineers such as Akira Nakashima, who still relied on the existent circuit theory of the time and took a grounded approach. Shannon's ideas were more abstract and relied on mathematics, thereby breaking new ground with his work, with his approach dominating modern-day electrical engineering. Using electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. 
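The core idea of the thesis, that circuits realizing Boolean operations can carry out arithmetic, is easy to render in software. The sketch below is our own illustration of a 1-bit full adder chained into a 4-bit ripple-carry adder, echoing the 4-bit adder mentioned above; it is not a transcription of Shannon's relay circuit.

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum_bit, carry_out) for one-bit inputs, built from AND/OR/XOR."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_4bit(x: list[int], y: list[int]) -> tuple[list[int], int]:
    """Add two 4-bit numbers given as bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(add_4bit([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 -> ([0, 0, 0, 1], 0), i.e. 8
```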
In 1987, Howard Gardner hailed Shannon's thesis as "possibly the most important, and also the most famous, master's thesis of the century." Herman Goldstine described it in 1972 as "surely ... one of the most important master's theses ever written ... It helped to change digital circuit design from an art to a science." One of the reviewers of his work commented that "To the best of my knowledge, this is the first application of the methods of symbolic logic to so practical an engineering problem. From the point of view of originality I rate the paper as outstanding." Shannon's master's thesis won the 1939 Alfred Noble Prize. Shannon received his PhD in mathematics from MIT in 1940. Vannevar Bush had suggested that Shannon should work on his dissertation at the Cold Spring Harbor Laboratory, in order to develop a mathematical formulation for Mendelian genetics. This research resulted in Shannon's PhD thesis, called An Algebra for Theoretical Genetics. The thesis went unpublished after Shannon lost interest, but it did contain important results. Notably, he was one of the first to apply an algebraic framework to study theoretical population genetics. In addition, Shannon devised a general expression for the distribution of several linked traits in a population after multiple generations under a random mating system, a result that was original at the time, as other population geneticists of the period had not yet worked out the theorem. In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. In Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann, and he also had occasional encounters with Albert Einstein and Kurt Gödel. Shannon worked freely across disciplines, and this ability may have contributed to his later development of mathematical information theory. Shannon had worked at Bell Labs for a few months in the summer of 1937, and returned there to work on fire-control systems and cryptography during World War II, under a contract with section D-2 (Control Systems section) of the National Defense Research Committee (NDRC). Shannon is credited with the invention of signal-flow graphs in 1942. He discovered the topological gain formula while investigating the functional operation of an analog computer. For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy's cryptanalytic service the methods used by the Government Code and Cypher School at Bletchley Park to break the cyphers used by the Kriegsmarine U-boats in the north Atlantic Ocean. He was also interested in the encipherment of speech and to this end spent time at Bell Labs. Shannon and Turing met at teatime in the cafeteria. Turing showed Shannon his 1936 paper that defined what is now known as the "universal Turing machine". This impressed Shannon, as many of its ideas complemented his own.[citation needed] Shannon and his team developed anti-aircraft systems that tracked enemy missiles and planes, while also determining the paths for intercepting missiles. In 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down.
Inside the volume on fire control, a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with "the problem of separating a signal from interfering noise in communications systems." In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age.[citation needed] Shannon's work on cryptography was even more closely related to his later publications on communication theory. At the close of the war, he prepared a classified memorandum for Bell Telephone Labs entitled "A Mathematical Theory of Cryptography", dated September 1945. A declassified version of this paper was published in 1949 as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. This paper incorporated many of the concepts and mathematical formulations that also appeared in his A Mathematical Theory of Communication. Shannon said that his wartime insights into communication theory and cryptography developed simultaneously, and that "they were so close together you couldn't separate them". In a footnote near the beginning of the classified report, Shannon announced his intention to "develop these results … in a forthcoming memorandum on the transmission of information." While he was at Bell Labs, Shannon proved that the cryptographic one-time pad is unbreakable in his classified research that was later published in 1949. The same article also proved that any unbreakable system must have essentially the same characteristics as the one-time pad: the key must be truly random, as large as the plaintext, never reused in whole or part, and kept secret. In 1948, the promised memorandum appeared as "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the message a sender wants to transmit. Shannon developed information entropy as a measure of the information content in a message, which is a measure of uncertainty reduced by the message. In so doing, he essentially invented the field of information theory. The book The Mathematical Theory of Communication reprints Shannon's 1948 article and Warren Weaver's popularization of it, which is accessible to the non-specialist. Weaver pointed out that the word "information" in communication theory is not related to what you do say, but to what you could say. That is, information is a measure of one's freedom of choice when one selects a message. Shannon's concepts were also popularized, subject to his own proofreading, in John Robinson Pierce's Symbols, Signals, and Noise. Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951, in his article "Prediction and Entropy of Printed English", showing upper and lower bounds of entropy on the statistics of English – giving a statistical foundation to language analysis. In addition, he proved that treating space as the 27th letter of the alphabet actually lowers uncertainty in written language, providing a clear quantifiable link between cultural practice and probabilistic cognition. 
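The entropy measure introduced in the 1948 paper can be illustrated in a few lines; this sketch (ours) computes the entropy of the empirical character distribution of a string, in the spirit of the printed-English estimates mentioned above.

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Entropy in bits per symbol of the empirical character distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(shannon_entropy("abab"))                # 1.0 bit per symbol (two equally likely symbols)
print(shannon_entropy("information theory"))  # higher: more symbols, less predictable
```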
Another notable paper published in 1949 is "Communication Theory of Secrecy Systems", a declassified version of his wartime work on the mathematical theory of cryptography, in which he proved that all theoretically unbreakable cyphers must have the same requirements as the one-time pad. He is credited with the introduction of the sampling theorem, which he had derived as early as 1940, and which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmission systems in the 1960s and later. He further wrote a paper in 1956 regarding coding for a noisy channel, which also became a classic paper in the field of information theory. However, also in 1956 he wrote a one-page editorial for the "IRE Transactions on Information Theory" entitled "The Bandwagon", which he began by observing: "Information theory has, in the last few years, become something of a scientific bandwagon" and which he concluded by warning: "Only by maintaining a thoroughly scientific attitude can we achieve real progress in communication theory and consolidate our present position." Claude Shannon's influence in the field has been immense: for example, in a 1973 collection of the key papers in the field of information theory, he was author or coauthor of 12 of the 49 papers cited, while no one else appeared more than three times. Even beyond his original paper in 1948, he is still regarded as the most important post-1948 contributor to the theory. In May 1951, Mervin Kelly received a request from the director of the CIA, General Walter Bedell Smith, regarding Shannon and the need for him, as Shannon was regarded, on "the best authority", as the "most eminently qualified scientist in the particular field concerned". As a result of the request, Shannon became part of the CIA's Special Cryptologic Advisory Group or SCAG. In his time at Bell Labs, he also co-developed pulse-code modulation alongside Bernard M. Oliver and John R. Pierce. In 1950, Shannon designed and built, with the help of his wife, Betty, a learning machine named Theseus. It consisted of a maze on a surface, through which a mechanical mouse could move. Below the surface were sensors (an electromechanical relay circuit) that followed the path of a mechanical mouse through the maze. The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior. After much trial and error, this device would learn the shortest path through the maze, and direct the mechanical mouse through the maze. The pattern of the maze could be changed at will, by rearranging movable partitions. Shannon's mouse appears to have been the first artificial learning device of its kind. Mazin Gilbert stated that Theseus "inspired the whole field of AI. This random trial and error is the foundation of artificial intelligence." Shannon wrote multiple influential papers on artificial intelligence, such as his 1950 paper titled "Programming a Computer for Playing Chess", and his 1953 paper titled "Computers and Automata".
Alongside John McCarthy, he co-edited a book titled Automata Studies, which was published in 1956. The categories in the articles within the volume were influenced by Shannon's own subject headings in his 1953 paper. Shannon shared McCarthy's goal of creating a science of intelligent machines, but also held a broader view of viable approaches in automata studies, such as neural nets, Turing machines, cybernetic mechanisms, and symbolic processing by computer. Shannon co-organized and participated in the Dartmouth workshop of 1956, alongside John McCarthy, Marvin Minsky and Nathaniel Rochester; the workshop is considered the founding event of the field of artificial intelligence. In 1956 Shannon joined the MIT faculty, holding an endowed chair. He worked in the Research Laboratory of Electronics (RLE). He continued to serve on the MIT faculty until 1978. Shannon developed Alzheimer's disease and spent the last few years of his life in a nursing home; he died in 2001, survived by his wife, a son and daughter, and two granddaughters. Outside of Shannon's academic pursuits, he was interested in juggling, unicycling, and chess. He also invented many devices, including a Roman numeral computer called THROBAC, and juggling machines. He built a device that could solve the Rubik's Cube puzzle. Shannon also invented flame-throwing trumpets, rocket-powered frisbees, and plastic foam shoes for navigating a lake, by wearing which it would appear to an observer as if Shannon were walking on water. Shannon designed the Minivac 601, a digital computer trainer to teach business people about how computers functioned. It was sold by the Scientific Development Corp starting in 1961. He is further considered the co-inventor of the first wearable computer along with Edward O. Thorp. The device was used to improve the odds when playing roulette. Shannon married Norma Levor, a wealthy, Jewish, left-wing intellectual, in January 1940. The marriage ended in divorce a year later. Levor later married Ben Barzman. Shannon met his second wife, Mary Elizabeth Moore (Betty), when she was a numerical analyst at Bell Labs. They were married in 1949. Betty assisted Claude in building some of his most famous inventions. They had three children. Shannon presented himself as apolitical and an atheist. There are six statues of Shannon sculpted by Eugene Daub: one at the University of Michigan; one at MIT in the Laboratory for Information and Decision Systems; one in Gaylord, Michigan; one at the University of California, San Diego; one at Bell Labs; and another at AT&T Shannon Labs. The statue in Gaylord is located in the Claude Shannon Memorial Park. After the breakup of the Bell System, the part of Bell Labs that remained with AT&T Corporation was named Shannon Labs in his honor. In June 1954, Shannon was listed as one of the top 20 most important scientists in America by Fortune. In 2013, information theory was listed as one of the top 10 revolutionary scientific theories by Science News. According to Neil Sloane, an AT&T Fellow who co-edited Shannon's large collection of papers in 1993, the perspective introduced by Shannon's communication theory (now called "information theory") is the foundation of the digital revolution, and every device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's publication in 1948: "He's one of the great men of the century. Without him, none of the things we know today would exist. The whole digital revolution started with him."
The cryptocurrency unit shannon (a synonym for gwei) is named after him. Shannon is credited by many with single-handedly creating information theory and laying the foundations for the Digital Age. His achievements are considered to be on par with those of Albert Einstein, Sir Isaac Newton, and Charles Darwin. A Mind at Play, a biography of Shannon written by Jimmy Soni and Rob Goodman, was published in 2017. They described Shannon as "the most important genius you’ve never heard of, a man whose intellect was on par with Albert Einstein and Isaac Newton". Consultant and writer Tom Rutledge, writing for Boston Review, stated that "Of the computer pioneers who drove the mid-20th-century information technology revolution—an elite men’s club of scholar-engineers who also helped crack Nazi codes and pinpoint missile trajectories—Shannon may have been the most brilliant of them all." Electrical engineer Robert Gallager stated about Shannon that "He had this amazing clarity of vision. Einstein had it, too – this ability to take on a complicated problem and find the right way to look at it, so that things become very simple." In an obituary by Neil Sloane and Robert Calderbank, they stated that "Shannon must rank near the top of the list of major figures of twentieth century science". Due to his work in multiple fields, Shannon is also regarded as a polymath. Historian James Gleick noted the importance of Shannon, stating that "Einstein looms large, and rightly so. But we’re not living in the relativity age, we’re living in the information age. It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten." Gleick further noted that "he created a whole field from scratch, from the brow of Zeus". On April 30, 2016, Shannon was honored with a Google Doodle to celebrate his life on what would have been his 100th birthday. The Bit Player, a feature film about Shannon directed by Mark Levinson, premiered at the World Science Festival in 2019. Drawn from interviews conducted with Shannon in his house in the 1980s, the film was released on Amazon Prime in August 2020. Claude, the large language model developed by artificial intelligence research company Anthropic, is partially named in honor of Shannon. The Mathematical Theory of Communication Shannon's The Mathematical Theory of Communication begins with an interpretation of his own work by Warren Weaver. Although Shannon's entire work is about communication itself, Warren Weaver communicated his ideas in such a way that those not acclimated to complex theory and mathematics could comprehend the fundamental laws he put forth. The coupling of their unique communicational abilities and ideas generated the Shannon-Weaver model, although the mathematical and theoretical underpinnings emanate entirely from Shannon's work after Weaver's introduction. For the layman, Weaver's introduction better communicates The Mathematical Theory of Communication, but Shannon's subsequent logic, mathematics, and expressive precision were responsible for defining the problem itself. Other work In 1949 Shannon completed a paper (published in March 1950) which estimates the game-tree complexity of chess, which is approximately $10^{120}$. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity.
The number is often cited as one of the barriers to solving the game of chess using an exhaustive analysis (i.e. brute force analysis). On March 9, 1949, Shannon presented a paper called "Programming a Computer for Playing Chess". The paper was presented at the National Institute of Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection. He proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. In March 1950 the paper was published in Philosophical Magazine, and it is considered one of the first articles published on the topic of programming a computer to play chess and using a computer to solve the game. In 1950, Shannon wrote an article titled "A Chess-Playing Machine", which was published in Scientific American. Both papers have had immense influence and laid the foundations for future chess programs. His process for having the computer decide which move to make was a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual relative values of the chess pieces (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available. Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim". Shannon also contributed to combinatorics and detection theory. His 1948 paper introduced many tools used in combinatorics. His 1944 work on detection theory was one of the earliest expositions of the “matched filter” principle. He was known as a successful investor who gave lectures on investing. A report from Barron's on August 11, 1986, detailed the recent performance of 1,026 mutual funds, and Shannon achieved a higher return than 1,025 of them. Comparing Shannon's portfolio from the late 1950s to 1986 with Warren Buffett's from 1965 to 1995, Shannon had a return of about 28%, compared to 27% for Buffett. One such method of Shannon's was labeled Shannon's demon: form a portfolio of equal parts cash and a stock, and rebalance regularly to take advantage of the stock's randomly jittering price movements. Shannon reportedly long thought of publishing about investing, but ultimately did not, despite giving multiple lectures. He was one of the first investors to download stock prices, and a 1981 snapshot of his portfolio was valued at $582,717.50, translating to $1.5 million in 2015, excluding another one of his stocks. Commemorations The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916. It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society, including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú, coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop in Jerusalem and in the IEEE Information Theory Society newsletter. 
A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society. Some of the activities included: Awards and honors list The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1973. Selected works See also References Further reading External links
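The position-scoring scheme described in the chess section above translates almost directly into code. The following is a minimal, illustrative Python sketch of a Shannon-style evaluation function (material, pawn-structure penalties, and mobility) driven by a plain minimax search; the Position structure, the move-generation callbacks, and the search driver are assumptions made for the sake of a self-contained example, not a reconstruction of Shannon's actual program.

```python
# A minimal sketch of the material-plus-positional scoring described above:
# 1/3/3/5/9 piece values, -0.5 per doubled/backward/isolated pawn, and
# +0.1 per legal move, evaluated as White's score minus Black's score.
# The position layout and the minimax driver are illustrative assumptions.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def side_score(side):
    """Score one side from simple counts: material, pawn weaknesses, mobility."""
    material = sum(PIECE_VALUES[p] * n for p, n in side["pieces"].items())
    pawn_penalty = 0.5 * (side["doubled_pawns"]
                          + side["backward_pawns"]
                          + side["isolated_pawns"])
    mobility = 0.1 * side["legal_moves"]
    return material + mobility - pawn_penalty

def evaluate(position):
    """Shannon-style evaluation: White total minus Black total."""
    return side_score(position["white"]) - side_score(position["black"])

def minimax(position, depth, legal_moves, apply_move, white_to_move):
    """Plain minimax over a user-supplied move generator (illustrative API)."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_move = None
    best_value = float("-inf") if white_to_move else float("inf")
    for move in moves:
        value, _ = minimax(apply_move(position, move), depth - 1,
                           legal_moves, apply_move, not white_to_move)
        if (white_to_move and value > best_value) or \
           (not white_to_move and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```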
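The rebalancing scheme labeled Shannon's demon can likewise be illustrated with a toy simulation. The double-or-halve price process and all parameter choices below are assumptions made only to show the effect of regular rebalancing against buying and holding; this is not a model Shannon published.

```python
# A toy simulation of the half-cash/half-stock rebalancing scheme described
# above ("Shannon's demon"). The stock price doubles or halves each period
# with equal probability (a synthetic, illustrative price model); the
# portfolio is rebalanced back to 50/50 after every move.
import random

def simulate(periods=1000, start_wealth=1.0, seed=0):
    rng = random.Random(seed)
    price = 1.0
    buy_and_hold_shares = start_wealth / price
    cash, shares = start_wealth / 2, (start_wealth / 2) / price  # 50/50 split
    for _ in range(periods):
        price *= 2.0 if rng.random() < 0.5 else 0.5       # jittering price
        wealth = cash + shares * price
        cash, shares = wealth / 2, (wealth / 2) / price    # rebalance to 50/50
    return cash + shares * price, buy_and_hold_shares * price

rebalanced, buy_and_hold = simulate()
print(f"rebalanced: {rebalanced:.2f}, buy and hold: {buy_and_hold:.2f}")
```

In this toy setting each rebalanced period multiplies wealth by either 1.5 or 0.75, so the geometric growth rate is positive even though the stock itself has no upward drift, which is the effect the term "demon" refers to.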
========================================
[SOURCE: https://en.wikipedia.org/wiki/Meta-learning_(computer_science)] | [TOKENS: 941]
Contents Meta-learning (computer science) Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and thus to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn. Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not in the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood. By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to critiques of metaheuristics, a possibly related problem. A good analogy to meta-learning, and the inspiration for Jürgen Schmidhuber's early work (1987) and Yoshua Bengio et al.'s work (1991), is that genetic evolution learns the learning procedure encoded in genes, which is executed in each individual's brain. In an open-ended hierarchical meta-learning system using genetic programming, better evolutionary methods can be learned by meta evolution, which itself can be improved by meta meta evolution, etc. Definition A proposed definition for a meta-learning system combines three requirements: Bias refers to the assumptions that influence the choice of explanatory hypotheses and not the notion of bias represented in the bias-variance dilemma. Meta-learning is concerned with two aspects of learning bias. Common approaches There are three common approaches: Model-based meta-learning models update their parameters rapidly with a few training steps, which can be achieved by their internal architecture or controlled by another meta-learner model. A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples. Meta Networks (MetaNet) learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. The core idea in metric-based meta-learning is similar to that of nearest-neighbor algorithms, in which the weights are generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent. It should represent the relationship between inputs in the task space and facilitate problem solving. A Siamese neural network is composed of two twin networks whose outputs are jointly trained, with a function on top that learns the relationship between pairs of input data samples. The two networks are identical, sharing the same weights and network parameters. 
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime and achieve satisfactory results. Optimization-based meta-learning algorithms aim to adjust the optimization algorithm so that the model can become good at learning from a few examples. The LSTM-based meta-learner learns the exact optimization algorithm used to train another learner neural-network classifier in the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. Model-Agnostic Meta-Learning (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent. Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely on meta-optimization through gradient descent and both are model-agnostic. Examples Some approaches which have been viewed as instances of meta-learning: References External links
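As a concrete illustration of the optimization-based approach, the following is a minimal numpy sketch of a Reptile-style outer loop: adapt a copy of the shared initialization on a sampled task with a few gradient steps, then move the initialization a small step toward the adapted weights. The task sampler and the quadratic toy loss below are placeholder assumptions, not part of the published algorithm.

```python
# A minimal sketch of the Reptile outer loop described above. The inner loss
# (a quadratic ||w - target||^2 per task) and the task distribution are
# illustrative assumptions chosen only to keep the example self-contained.
import numpy as np

def inner_sgd(w, task_target, steps=5, lr=0.01):
    """A few plain gradient steps on one task (toy quadratic loss)."""
    for _ in range(steps):
        grad = 2 * (w - task_target)      # gradient of ||w - target||^2
        w = w - lr * grad
    return w

def reptile(sample_task, dim=10, meta_iterations=1000, meta_lr=0.1):
    w = np.zeros(dim)                      # shared initialization being meta-learned
    for _ in range(meta_iterations):
        task_target = sample_task()
        w_adapted = inner_sgd(w.copy(), task_target)
        w = w + meta_lr * (w_adapted - w)  # Reptile step: move toward adapted weights
    return w

# Toy task distribution: targets scattered around a common center.
rng = np.random.default_rng(0)
center = rng.normal(size=10)
meta_init = reptile(lambda: center + 0.1 * rng.normal(size=10))
```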
========================================
[SOURCE: https://en.wikipedia.org/wiki/Rule-based_machine_learning] | [TOKENS: 365]
Contents Rule-based machine learning Rule-based machine learning (RBML) is a term in computer science intended to encompass any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate or apply. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. Rule-based machine learning approaches include learning classifier systems, association rule learning, artificial immune systems, and any other method that relies on a set of rules, each covering contextual knowledge. While rule-based machine learning is conceptually a type of rule-based system, it is distinct from traditional rule-based systems, which are often hand-crafted, and from other rule-based decision makers. This is because rule-based machine learning applies some form of learning algorithm, such as rough set theory, to identify and minimise the set of features and to automatically identify useful rules, rather than a human needing to apply prior domain knowledge to manually construct rules and curate a rule set. Rules Rules typically take the form of an '{IF:THEN} expression' (e.g. {IF 'condition' THEN 'result'}, or as a more specific example, {IF 'red' AND 'octagon' THEN 'stop-sign'}). An individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Therefore rule-based machine learning methods typically comprise a set of rules, or knowledge base, that collectively make up the prediction model, usually known as a decision algorithm. Rules can also be interpreted in various ways depending on the domain knowledge, the data types (discrete or continuous), and combinations thereof. RIPPER Repeated incremental pruning to produce error reduction (RIPPER) is a propositional rule learner proposed by William W. Cohen as an optimized version of IREP. See also References
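A minimal sketch of how a rule set of the {IF condition THEN result} form can be represented and applied as a decision algorithm is shown below; the example rules, the first-match ordering, and the default class are illustrative assumptions rather than the output of any particular learner such as RIPPER.

```python
# A minimal sketch of a rule set of the {IF condition THEN result} form
# described above, applied as a simple first-match decision algorithm.
# The rules and the default class are illustrative, not learned.

rules = [
    ({"colour": "red", "shape": "octagon"}, "stop-sign"),
    ({"colour": "red", "shape": "triangle"}, "yield-sign"),
]

def predict(example, rules, default="unknown"):
    """Return the result of the first rule whose conditions all match."""
    for conditions, result in rules:
        if all(example.get(attr) == value for attr, value in conditions.items()):
            return result
    return default

print(predict({"colour": "red", "shape": "octagon"}, rules))  # -> stop-sign
```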
========================================
[SOURCE: https://en.wikipedia.org/wiki/Online_machine_learning] | [TOKENS: 7055]
Contents Online machine learning In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, necessitating out-of-core algorithms. It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g., prediction of prices in international financial markets. Online learning algorithms may be prone to catastrophic interference, a problem that can be addressed by incremental learning approaches. Online machine learning algorithms find applications in a wide variety of fields such as sponsored search to maximize ad revenue, portfolio optimization, shortest path prediction (with stochastic weights, e.g. traffic on roads for a maps application), spam filtering, real-time fraud detection, dynamic pricing for e-commerce, etc. There is also growing interest in the use of online learning paradigms for large language models (LLMs) to enable continuous, real-time adaptation after the initial training. Introduction In the setting of supervised learning, a function f : X → Y {\displaystyle f:X\to Y} is to be learned, where X {\displaystyle X} is thought of as a space of inputs and Y {\displaystyle Y} as a space of outputs, that predicts well on instances that are drawn from a joint probability distribution p ( x , y ) {\displaystyle p(x,y)} on X × Y {\displaystyle X\times Y} . In reality, the learner never knows the true distribution p ( x , y ) {\displaystyle p(x,y)} over instances. Instead, the learner usually has access to a training set of examples ( x 1 , y 1 ) , … , ( x n , y n ) {\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})} . In this setting, the loss function is given as V : Y × Y → R {\displaystyle V:Y\times Y\to \mathbb {R} } , such that V ( f ( x ) , y ) {\displaystyle V(f(x),y)} measures the difference between the predicted value f ( x ) {\displaystyle f(x)} and the true value y {\displaystyle y} . The ideal goal is to select a function f ∈ H {\displaystyle f\in {\mathcal {H}}} , where H {\displaystyle {\mathcal {H}}} is a space of functions called a hypothesis space, so that some notion of total loss is minimized. Depending on the type of model (statistical or adversarial), one can devise different notions of loss, which lead to different learning algorithms. Statistical view of online learning In statistical learning models, the training samples ( x i , y i ) {\displaystyle (x_{i},y_{i})} are assumed to have been drawn from the true distribution p ( x , y ) {\displaystyle p(x,y)} and the objective is to minimize the expected "risk" I [ f ] = E [ V ( f ( x ) , y ) ] = ∫ V ( f ( x ) , y ) d p ( x , y ) . {\displaystyle I[f]=\mathbb {E} [V(f(x),y)]=\int V(f(x),y)\,dp(x,y)\ .} A common paradigm in this situation is to estimate a function f ^ {\displaystyle {\hat {f}}} through empirical risk minimization or regularized empirical risk minimization (usually Tikhonov regularization). The choice of loss function here gives rise to several well-known learning algorithms such as regularized least squares and support vector machines. 
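For concreteness, the batch (empirical risk minimization) solution for the square loss with Tikhonov regularization can be written in a few lines of numpy; the synthetic data below is an assumption used only to make the sketch self-contained, and the online algorithms discussed next approximate this solution incrementally.

```python
# A sketch of batch (regularized) empirical risk minimization for the square
# loss: w = (X^T X + lambda I)^{-1} X^T y. The synthetic data is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Solve the ridge-regression normal equations once, over the whole data set.
w_batch = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```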
A purely online model in this category would learn based on just the new input ( x t + 1 , y t + 1 ) {\displaystyle (x_{t+1},y_{t+1})} , the current best predictor f t {\displaystyle f_{t}} and some extra stored information (which is usually expected to have storage requirements independent of training data size). For many formulations, for example nonlinear kernel methods, true online learning is not possible, though a form of hybrid online learning with recursive algorithms can be used where f t + 1 {\displaystyle f_{t+1}} is permitted to depend on f t {\displaystyle f_{t}} and all previous data points ( x 1 , y 1 ) , … , ( x t , y t ) {\displaystyle (x_{1},y_{1}),\ldots ,(x_{t},y_{t})} . In this case, the space requirements are no longer guaranteed to be constant since it requires storing all previous data points, but the solution may take less time to compute with the addition of a new data point, as compared to batch learning techniques. A common strategy to overcome the above issues is to learn using mini-batches, which process a small batch of b ≥ 1 {\displaystyle b\geq 1} data points at a time, this can be considered as pseudo-online learning for b {\displaystyle b} much smaller than the total number of training points. Mini-batch techniques are used with repeated passing over the training data to obtain optimized out-of-core versions of machine learning algorithms, for example, stochastic gradient descent. When combined with backpropagation, this is currently the de facto training method for training artificial neural networks. The simple example of linear least squares is used to explain a variety of ideas in online learning. The ideas are general enough to be applied to other settings, for example, with other convex loss functions. Consider the setting of supervised learning with f {\displaystyle f} being a linear function to be learned: f ( x j ) = ⟨ w , x j ⟩ = w ⋅ x j {\displaystyle f(x_{j})=\langle w,x_{j}\rangle =w\cdot x_{j}} where x j ∈ R d {\displaystyle x_{j}\in \mathbb {R} ^{d}} is a vector of inputs (data points) and w ∈ R d {\displaystyle w\in \mathbb {R} ^{d}} is a linear filter vector. The goal is to compute the filter vector w {\displaystyle w} . To this end, a square loss function V ( f ( x j ) , y j ) = ( f ( x j ) − y j ) 2 = ( ⟨ w , x j ⟩ − y j ) 2 {\displaystyle V(f(x_{j}),y_{j})=(f(x_{j})-y_{j})^{2}=(\langle w,x_{j}\rangle -y_{j})^{2}} is used to compute the vector w {\displaystyle w} that minimizes the empirical loss I n [ w ] = ∑ j = 1 n V ( ⟨ w , x j ⟩ , y j ) = ∑ j = 1 n ( x j T w − y j ) 2 {\displaystyle I_{n}[w]=\sum _{j=1}^{n}V(\langle w,x_{j}\rangle ,y_{j})=\sum _{j=1}^{n}(x_{j}^{\mathsf {T}}w-y_{j})^{2}} where y j ∈ R . {\displaystyle y_{j}\in \mathbb {R} .} Let X {\displaystyle X} be the i × d {\displaystyle i\times d} data matrix and y ∈ R i {\displaystyle y\in \mathbb {R} ^{i}} is the column vector of target values after the arrival of the first i {\displaystyle i} data points. Assuming that the covariance matrix Σ i = X T X {\displaystyle \Sigma _{i}=X^{\mathsf {T}}X} is invertible (otherwise it is preferential to proceed in a similar fashion with Tikhonov regularization), the best solution f ∗ ( x ) = ⟨ w ∗ , x ⟩ {\displaystyle f^{*}(x)=\langle w^{*},x\rangle } to the linear least squares problem is given by w ∗ = ( X T X ) − 1 X T y = Σ i − 1 ∑ j = 1 i x j y j . 
{\displaystyle w^{*}=(X^{\mathsf {T}}X)^{-1}X^{\mathsf {T}}y=\Sigma _{i}^{-1}\sum _{j=1}^{i}x_{j}y_{j}.} Now, calculating the covariance matrix Σ i = ∑ j = 1 i x j x j T {\displaystyle \Sigma _{i}=\sum _{j=1}^{i}x_{j}x_{j}^{\mathsf {T}}} takes time O ( i d 2 ) {\displaystyle O(id^{2})} , inverting the d × d {\displaystyle d\times d} matrix takes time O ( d 3 ) {\displaystyle O(d^{3})} , while the rest of the multiplication takes time O ( d 2 ) {\displaystyle O(d^{2})} , giving a total time of O ( i d 2 + d 3 ) {\displaystyle O(id^{2}+d^{3})} . When there are n {\displaystyle n} total points in the dataset, to recompute the solution after the arrival of every datapoint i = 1 , … , n {\displaystyle i=1,\ldots ,n} , the naive approach will have a total complexity O ( n 2 d 2 + n d 3 ) {\displaystyle O(n^{2}d^{2}+nd^{3})} . Note that when storing the matrix Σ i {\displaystyle \Sigma _{i}} , then updating it at each step needs only adding x i + 1 x i + 1 T {\displaystyle x_{i+1}x_{i+1}^{\mathsf {T}}} , which takes O ( d 2 ) {\displaystyle O(d^{2})} time, reducing the total time to O ( n d 2 + n d 3 ) = O ( n d 3 ) {\displaystyle O(nd^{2}+nd^{3})=O(nd^{3})} , but with an additional storage space of O ( d 2 ) {\displaystyle O(d^{2})} to store Σ i {\displaystyle \Sigma _{i}} . The recursive least squares (RLS) algorithm considers an online approach to the least squares problem. It can be shown that by initialising w 0 = 0 ∈ R d {\displaystyle \textstyle w_{0}=0\in \mathbb {R} ^{d}} and Γ 0 = I ∈ R d × d {\displaystyle \textstyle \Gamma _{0}=I\in \mathbb {R} ^{d\times d}} , the solution of the linear least squares problem given in the previous section can be computed by the following iteration: Γ i = Γ i − 1 − Γ i − 1 x i x i T Γ i − 1 1 + x i T Γ i − 1 x i {\displaystyle \Gamma _{i}=\Gamma _{i-1}-{\frac {\Gamma _{i-1}x_{i}x_{i}^{\mathsf {T}}\Gamma _{i-1}}{1+x_{i}^{\mathsf {T}}\Gamma _{i-1}x_{i}}}} w i = w i − 1 − Γ i x i ( x i T w i − 1 − y i ) {\displaystyle w_{i}=w_{i-1}-\Gamma _{i}x_{i}\left(x_{i}^{\mathsf {T}}w_{i-1}-y_{i}\right)} The above iteration algorithm can be proved using induction on i {\displaystyle i} . The proof also shows that Γ i = Σ i − 1 {\displaystyle \Gamma _{i}=\Sigma _{i}^{-1}} . One can look at RLS also in the context of adaptive filters (see RLS). The complexity for n {\displaystyle n} steps of this algorithm is O ( n d 2 ) {\displaystyle O(nd^{2})} , which is an order of magnitude faster than the corresponding batch learning complexity. The storage requirements at every step i {\displaystyle i} here are to store the matrix Γ i {\displaystyle \Gamma _{i}} , which is constant at O ( d 2 ) {\displaystyle O(d^{2})} . For the case when Σ i {\displaystyle \Sigma _{i}} is not invertible, consider the regularised version of the problem loss function ∑ j = 1 n ( x j T w − y j ) 2 + λ ‖ w ‖ 2 2 {\displaystyle \sum _{j=1}^{n}\left(x_{j}^{\mathsf {T}}w-y_{j}\right)^{2}+\lambda \left\|w\right\|_{2}^{2}} . Then, it's easy to show that the same algorithm works with Γ 0 = ( I + λ I ) − 1 {\displaystyle \Gamma _{0}=(I+\lambda I)^{-1}} , and the iterations proceed to give Γ i = ( Σ i + λ I ) − 1 {\displaystyle \Gamma _{i}=(\Sigma _{i}+\lambda I)^{-1}} . 
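The recursive least squares iteration above transcribes directly into numpy. The following sketch uses the stated initialisation Γ_0 = I and w_0 = 0 and updates Γ and w in O(d^2) time per arriving example; the synthetic data stream is an assumption used only for illustration.

```python
# A direct numpy transcription of the recursive least squares (RLS) iteration
# given above: Gamma_0 = I, w_0 = 0, then each arriving pair (x_i, y_i) updates
# Gamma (the running inverse covariance) with a rank-one correction and then
# updates the weight vector, all in O(d^2) per step.
import numpy as np

def rls(stream, d):
    gamma = np.eye(d)          # Gamma_0 = I
    w = np.zeros(d)            # w_0 = 0
    for x, y in stream:
        gx = gamma @ x
        gamma = gamma - np.outer(gx, gx) / (1.0 + x @ gx)   # Gamma_i update
        w = w - gamma @ x * (x @ w - y)                     # w_i update
    return w

# Illustrative synthetic stream: noisy linear targets for a known w_true.
rng = np.random.default_rng(0)
d = 5
w_true = np.arange(d, dtype=float)
stream = ((x, x @ w_true + 0.01 * rng.normal())
          for x in rng.normal(size=(500, d)))
w_hat = rls(stream, d)   # close to w_true once the stream is consumed
```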
When this w i = w i − 1 − Γ i x i ( x i T w i − 1 − y i ) {\displaystyle w_{i}=w_{i-1}-\Gamma _{i}x_{i}\left(x_{i}^{\mathsf {T}}w_{i-1}-y_{i}\right)} is replaced by w i = w i − 1 − γ i x i ( x i T w i − 1 − y i ) = w i − 1 − γ i ∇ V ( ⟨ w i − 1 , x i ⟩ , y i ) {\displaystyle w_{i}=w_{i-1}-\gamma _{i}x_{i}\left(x_{i}^{\mathsf {T}}w_{i-1}-y_{i}\right)=w_{i-1}-\gamma _{i}\nabla V(\langle w_{i-1},x_{i}\rangle ,y_{i})} or Γ i ∈ R d × d {\displaystyle \Gamma _{i}\in \mathbb {R} ^{d\times d}} by γ i ∈ R {\displaystyle \gamma _{i}\in \mathbb {R} } , this becomes the stochastic gradient descent algorithm. In this case, the complexity for n {\displaystyle n} steps of this algorithm reduces to O ( n d ) {\displaystyle O(nd)} . The storage requirements at every step i {\displaystyle i} are constant at O ( d ) {\displaystyle O(d)} . However, the stepsize γ i {\displaystyle \gamma _{i}} needs to be chosen carefully to solve the expected risk minimization problem, as detailed above. By choosing a decaying step size γ i ≈ 1 i , {\displaystyle \gamma _{i}\approx {\frac {1}{\sqrt {i}}},} one can prove the convergence of the average iterate w ¯ n = 1 n ∑ i = 1 n w i {\textstyle {\overline {w}}_{n}={\frac {1}{n}}\sum _{i=1}^{n}w_{i}} . This setting is a special case of stochastic optimization, a well known problem in optimization. In practice, one can perform multiple stochastic gradient passes (also called cycles or epochs) over the data. The algorithm thus obtained is called incremental gradient method and corresponds to an iteration w i = w i − 1 − γ i ∇ V ( ⟨ w i − 1 , x t i ⟩ , y t i ) {\displaystyle w_{i}=w_{i-1}-\gamma _{i}\nabla V(\langle w_{i-1},x_{t_{i}}\rangle ,y_{t_{i}})} The main difference with the stochastic gradient method is that here a sequence t i {\displaystyle t_{i}} is chosen to decide which training point is visited in the i {\displaystyle i} -th step. Such a sequence can be stochastic or deterministic. The number of iterations is then decoupled to the number of points (each point can be considered more than once). The incremental gradient method can be shown to provide a minimizer to the empirical risk. Incremental techniques can be advantageous when considering objective functions made up of a sum of many terms e.g. an empirical error corresponding to a very large dataset. Kernels can be used to extend the above algorithms to non-parametric models (or models where the parameters form an infinite dimensional space). The corresponding procedure will no longer be truly online and instead involve storing all the data points, but is still faster than the brute force method. This discussion is restricted to the case of the square loss, though it can be extended to any convex loss. It can be shown by an easy induction that if X i {\displaystyle X_{i}} is the data matrix and w i {\displaystyle w_{i}} is the output after i {\displaystyle i} steps of the SGD algorithm, then, w i = X i T c i {\displaystyle w_{i}=X_{i}^{\mathsf {T}}c_{i}} where c i = ( ( c i ) 1 , ( c i ) 2 , . . . , ( c i ) i ) ∈ R i {\displaystyle c_{i}=((c_{i})_{1},(c_{i})_{2},...,(c_{i})_{i})\in \mathbb {R} ^{i}} and the sequence c i {\displaystyle c_{i}} satisfies the recursion: c 0 = 0 {\displaystyle c_{0}=0} ( c i ) j = ( c i − 1 ) j , j = 1 , 2 , . . . 
, i − 1 {\displaystyle (c_{i})_{j}=(c_{i-1})_{j},j=1,2,...,i-1} and ( c i ) i = γ i ( y i − ∑ j = 1 i − 1 ( c i − 1 ) j ⟨ x j , x i ⟩ ) {\displaystyle (c_{i})_{i}=\gamma _{i}{\Big (}y_{i}-\sum _{j=1}^{i-1}(c_{i-1})_{j}\langle x_{j},x_{i}\rangle {\Big )}} Notice that here ⟨ x j , x i ⟩ {\displaystyle \langle x_{j},x_{i}\rangle } is just the standard Kernel on R d {\displaystyle \mathbb {R} ^{d}} , and the predictor is of the form f i ( x ) = ⟨ w i − 1 , x ⟩ = ∑ j = 1 i − 1 ( c i − 1 ) j ⟨ x j , x ⟩ . {\displaystyle f_{i}(x)=\langle w_{i-1},x\rangle =\sum _{j=1}^{i-1}(c_{i-1})_{j}\langle x_{j},x\rangle .} Now, if a general kernel K {\displaystyle K} is introduced instead and let the predictor be f i ( x ) = ∑ j = 1 i − 1 ( c i − 1 ) j K ( x j , x ) {\displaystyle f_{i}(x)=\sum _{j=1}^{i-1}(c_{i-1})_{j}K(x_{j},x)} then the same proof will also show that predictor minimising the least squares loss is obtained by changing the above recursion to ( c i ) i = γ i ( y i − ∑ j = 1 i − 1 ( c i − 1 ) j K ( x j , x i ) ) {\displaystyle (c_{i})_{i}=\gamma _{i}{\Big (}y_{i}-\sum _{j=1}^{i-1}(c_{i-1})_{j}K(x_{j},x_{i}){\Big )}} The above expression requires storing all the data for updating c i {\displaystyle c_{i}} . The total time complexity for the recursion when evaluating for the n {\displaystyle n} -th datapoint is O ( n 2 d k ) {\displaystyle O(n^{2}dk)} , where k {\displaystyle k} is the cost of evaluating the kernel on a single pair of points. Thus, the use of the kernel has allowed the movement from a finite dimensional parameter space w i ∈ R d {\displaystyle \textstyle w_{i}\in \mathbb {R} ^{d}} to a possibly infinite dimensional feature represented by a kernel K {\displaystyle K} by instead performing the recursion on the space of parameters c i ∈ R i {\displaystyle \textstyle c_{i}\in \mathbb {R} ^{i}} , whose dimension is the same as the size of the training dataset. In general, this is a consequence of the representer theorem. Online convex optimization (OCO) is a general framework for decision making which leverages convex optimization to allow for efficient algorithms. The framework is that of repeated game playing as follows: For t = 1 , 2 , . . . , T {\displaystyle t=1,2,...,T} The goal is to minimize regret, or the difference between cumulative loss and the loss of the best fixed point u ∈ S {\displaystyle u\in S} in hindsight. As an example, consider the case of online least squares linear regression. Here, the weight vectors come from the convex set S = R d {\displaystyle S=\mathbb {R} ^{d}} , and nature sends back the convex loss function v t ( w ) = ( ⟨ w , x t ⟩ − y t ) 2 {\displaystyle v_{t}(w)=(\langle w,x_{t}\rangle -y_{t})^{2}} . Note here that y t {\displaystyle y_{t}} is implicitly sent with v t {\displaystyle v_{t}} . Some online prediction problems however cannot fit in the framework of OCO. For example, in online classification, the prediction domain and the loss functions are not convex. In such scenarios, two simple techniques for convexification are used: randomisation and surrogate loss functions.[citation needed] Some simple online convex optimisation algorithms are: The simplest learning rule to try is to select (at the current step) the hypothesis that has the least loss over all past rounds. 
This algorithm is called Follow the leader, and round t {\displaystyle t} is simply given by: w t = a r g m i n w ∈ S ⁡ ∑ i = 1 t − 1 v i ( w ) {\displaystyle w_{t}=\mathop {\operatorname {arg\,min} } _{w\in S}\sum _{i=1}^{t-1}v_{i}(w)} This method can thus be looked as a greedy algorithm. For the case of online quadratic optimization (where the loss function is v t ( w ) = ‖ w − x t ‖ 2 2 {\displaystyle v_{t}(w)=\left\|w-x_{t}\right\|_{2}^{2}} ), one can show a regret bound that grows as log ⁡ ( T ) {\displaystyle \log(T)} . However, similar bounds cannot be obtained for the FTL algorithm for other important families of models like online linear optimization. To do so, one modifies FTL by adding regularisation. This is a natural modification of FTL that is used to stabilise the FTL solutions and obtain better regret bounds. A regularisation function R : S → R {\displaystyle R:S\to \mathbb {R} } is chosen and learning performed in round t as follows: w t = a r g m i n w ∈ S ⁡ ∑ i = 1 t − 1 v i ( w ) + R ( w ) {\displaystyle w_{t}=\mathop {\operatorname {arg\,min} } _{w\in S}\sum _{i=1}^{t-1}v_{i}(w)+R(w)} As a special example, consider the case of online linear optimisation i.e. where nature sends back loss functions of the form v t ( w ) = ⟨ w , z t ⟩ {\displaystyle v_{t}(w)=\langle w,z_{t}\rangle } . Also, let S = R d {\displaystyle S=\mathbb {R} ^{d}} . Suppose the regularisation function R ( w ) = 1 2 η ‖ w ‖ 2 2 {\textstyle R(w)={\frac {1}{2\eta }}\left\|w\right\|_{2}^{2}} is chosen for some positive number η {\displaystyle \eta } . Then, one can show that the regret minimising iteration becomes w t + 1 = − η ∑ i = 1 t z i = w t − η z t {\displaystyle w_{t+1}=-\eta \sum _{i=1}^{t}z_{i}=w_{t}-\eta z_{t}} Note that this can be rewritten as w t + 1 = w t − η ∇ v t ( w t ) {\displaystyle w_{t+1}=w_{t}-\eta \nabla v_{t}(w_{t})} , which looks exactly like online gradient descent. If S is instead some convex subspace of R d {\displaystyle \mathbb {R} ^{d}} , S would need to be projected onto, leading to the modified update rule w t + 1 = Π S ( − η ∑ i = 1 t z i ) = Π S ( η θ t + 1 ) {\displaystyle w_{t+1}=\Pi _{S}(-\eta \sum _{i=1}^{t}z_{i})=\Pi _{S}(\eta \theta _{t+1})} This algorithm is known as lazy projection, as the vector θ t + 1 {\displaystyle \theta _{t+1}} accumulates the gradients. It is also known as Nesterov's dual averaging algorithm. In this scenario of linear loss functions and quadratic regularisation, the regret is bounded by O ( T ) {\displaystyle O({\sqrt {T}})} , and thus the average regret goes to 0 as desired. The above proved a regret bound for linear loss functions v t ( w ) = ⟨ w , z t ⟩ {\displaystyle v_{t}(w)=\langle w,z_{t}\rangle } . To generalise the algorithm to any convex loss function, the subgradient ∂ v t ( w t ) {\displaystyle \partial v_{t}(w_{t})} of v t {\displaystyle v_{t}} is used as a linear approximation to v t {\displaystyle v_{t}} near w t {\displaystyle w_{t}} , leading to the online subgradient descent algorithm: Initialise parameter η , w 1 = 0 {\displaystyle \eta ,w_{1}=0} For t = 1 , 2 , . . . , T {\displaystyle t=1,2,...,T} One can use the OSD algorithm to derive O ( T ) {\displaystyle O({\sqrt {T}})} regret bounds for the online version of SVM's for classification, which use the hinge loss v t ( w ) = max { 0 , 1 − y t ( w ⋅ x t ) } {\displaystyle v_{t}(w)=\max\{0,1-y_{t}(w\cdot x_{t})\}} Quadratically regularised FTRL algorithms lead to lazily projected gradient algorithms as described above. 
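As a small illustration of online subgradient descent, the sketch below applies it to the hinge loss mentioned above, giving an online linear SVM. The constant step size and the synthetic stream are illustrative assumptions; regret analyses typically use carefully chosen, often decaying, step sizes.

```python
# A sketch of online subgradient descent for the hinge loss mentioned above
# (an online linear SVM): at each round the learner predicts with w, receives
# (x_t, y_t) with y_t in {-1, +1}, and steps along a subgradient of
# v_t(w) = max(0, 1 - y_t <w, x_t>). Step size and data are illustrative.
import numpy as np

def online_svm(stream, d, eta=0.1):
    w = np.zeros(d)
    mistakes = 0
    for x, y in stream:
        if y * (w @ x) <= 0:
            mistakes += 1            # count prediction errors before updating
        if y * (w @ x) < 1:          # hinge loss active: -y*x is a subgradient
            w = w + eta * y * x
    return w, mistakes

# Illustrative linearly separable stream generated from a random hyperplane.
rng = np.random.default_rng(0)
d = 5
w_star = rng.normal(size=d)
stream = ((x, 1 if x @ w_star > 0 else -1) for x in rng.normal(size=(1000, d)))
w_hat, errors = online_svm(stream, d)
```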
To use the above for arbitrary convex functions and regularisers, one uses online mirror descent. The optimal regularization in hindsight can be derived for linear loss functions; this leads to the AdaGrad algorithm. For the Euclidean regularisation, one can show a regret bound of O ( T ) {\displaystyle O({\sqrt {T}})} , which can be improved further to O ( log T ) {\displaystyle O(\log T)} for strongly convex and exp-concave loss functions. Continual learning Continual learning means constantly improving the learned model by processing continuous streams of information. Continual learning capabilities are essential for software systems and autonomous agents interacting in an ever-changing real world. However, continual learning is a challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting. Interpretations of online learning The paradigm of online learning has different interpretations depending on the choice of the learning model, each of which has distinct implications about the predictive quality of the sequence of functions f 1 , f 2 , … , f n {\displaystyle f_{1},f_{2},\ldots ,f_{n}} . The prototypical stochastic gradient descent algorithm is used for this discussion. As noted above, its recursion is given by w t = w t − 1 − γ t ∇ V ( ⟨ w t − 1 , x t ⟩ , y t ) {\displaystyle w_{t}=w_{t-1}-\gamma _{t}\nabla V(\langle w_{t-1},x_{t}\rangle ,y_{t})} The first interpretation considers the stochastic gradient descent method as applied to the problem of minimizing the expected risk I [ w ] {\displaystyle I[w]} defined above. Indeed, in the case of an infinite stream of data, since the examples ( x 1 , y 1 ) , ( x 2 , y 2 ) , … {\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),\ldots } are assumed to be drawn i.i.d. from the distribution p ( x , y ) {\displaystyle p(x,y)} , the sequence of gradients of V ( ⋅ , ⋅ ) {\displaystyle V(\cdot ,\cdot )} in the above iteration is an i.i.d. sample of stochastic estimates of the gradient of the expected risk I [ w ] {\displaystyle I[w]} and therefore one can apply complexity results for the stochastic gradient descent method to bound the deviation I [ w t ] − I [ w ∗ ] {\displaystyle I[w_{t}]-I[w^{\ast }]} , where w ∗ {\displaystyle w^{\ast }} is the minimizer of I [ w ] {\displaystyle I[w]} . This interpretation is also valid in the case of a finite training set; although with multiple passes through the data the gradients are no longer independent, complexity results can still be obtained in special cases. The second interpretation applies to the case of a finite training set and considers the SGD algorithm as an instance of the incremental gradient descent method. In this case, one instead looks at the empirical risk: I n [ w ] = 1 n ∑ i = 1 n V ( ⟨ w , x i ⟩ , y i ) . {\displaystyle I_{n}[w]={\frac {1}{n}}\sum _{i=1}^{n}V(\langle w,x_{i}\rangle ,y_{i})\ .} Since the gradients of V ( ⋅ , ⋅ ) {\displaystyle V(\cdot ,\cdot )} in the incremental gradient descent iterations are also stochastic estimates of the gradient of I n [ w ] {\displaystyle I_{n}[w]} , this interpretation is also related to the stochastic gradient descent method, but applied to minimize the empirical risk as opposed to the expected risk. 
Since this interpretation concerns the empirical risk and not the expected risk, multiple passes through the data are readily allowed and actually lead to tighter bounds on the deviations I n [ w t ] − I n [ w n ∗ ] {\displaystyle I_{n}[w_{t}]-I_{n}[w_{n}^{\ast }]} , where w n ∗ {\displaystyle w_{n}^{\ast }} is the minimizer of I n [ w ] {\displaystyle I_{n}[w]} . Implementations See also Learning paradigms General algorithms Learning models References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Quantum_machine_learning] | [TOKENS: 6226]
Contents Quantum machine learning Quantum machine learning (QML) is the study of quantum algorithms for machine learning. It often refers to quantum algorithms for machine learning tasks which analyze classical data, sometimes called quantum-enhanced machine learning. QML algorithms use qubits and quantum operations to try to improve the space and time complexity of classical machine learning algorithms. Hybrid QML methods involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device. These routines can be more complex in nature and executed faster on a quantum computer. Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data. The term "quantum machine learning" is sometimes used to refer to classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning the phase transitions of a quantum system or creating new quantum experiments. QML also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa. Furthermore, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory". Machine learning with quantum computers Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit reveals the result of a binary classification task. While many proposals of QML algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special-purpose quantum devices. Early work on quantum associative memories was done by Dan Ventura and Tony Martinez and by Carlo A. Trugenberger in the late 1990s and early 2000s. Associative (or content-addressable) memories are able to recognize stored content on the basis of a similarity measure, while random access memories are accessed by the address of stored information and not its content. As such they must be able to retrieve both incomplete and corrupted patterns, which is the essential machine learning task of pattern recognition. Typical classical associative memories store p patterns in the O ( n 2 ) {\displaystyle O(n^{2})} interactions (synapses) of a real, symmetric energy matrix over a network of n artificial neurons. The encoding is such that the desired patterns are local minima of the energy functional and retrieval is done by minimizing the total energy, starting from an initial configuration. Unfortunately, classical associative memories are severely limited by the phenomenon of cross-talk. When too many patterns are stored, spurious memories appear and quickly proliferate, so that the energy landscape becomes disordered and retrieval is no longer possible. 
The number of storable patterns is typically limited by a linear function of the number of neurons, p ≤ O ( n ) {\displaystyle p\leq O(n)} . Quantum associative memories (in their simplest realization) store patterns in a unitary matrix U acting on the Hilbert space of n qubits. Retrieval is realized by the unitary evolution of a fixed initial state to a quantum superposition of the desired patterns with probability distribution peaked on the pattern most similar to the input. By its very quantum nature, the retrieval process is thus probabilistic. Because quantum associative memories are free from cross-talk, however, spurious memories are never generated. Correspondingly, they have a greater capacity than classical ones. The number of parameters in the unitary matrix U is O ( p n ) {\displaystyle O(pn)} . One can thus have efficient, spurious-memory-free quantum associative memories for any polynomial number of patterns. If the matrix U is encoded as a unique operator (as opposed to a sequence of gates, as in the circuit model), e.g. by an optical interferometer, the retrieval becomes efficient even for an exponential number of patterns. A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, associating the amplitudes of a quantum state with the inputs and outputs of computations. Since a state of n {\displaystyle n} qubits is described by 2 n {\displaystyle 2^{n}} complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector. The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose resources grow polynomially in the number of qubits n {\displaystyle n} , which amounts to a logarithmic time complexity in the number of amplitudes and thereby the dimension of the input. Many QML algorithms in this category are based on variations of the quantum algorithm for linear systems of equations (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a Hamiltonian which entry-wise corresponds to the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse or low rank. For reference, any known classical algorithm for matrix inversion requires a number of operations that grows more than quadratically in the dimension of the matrix (e.g. O ( n 2.373 ) {\displaystyle O{\mathord {\left(n^{2.373}\right)}}} ), though such classical algorithms are not restricted to sparse matrices. Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a linear system of equations, for example in least-squares linear regression, the least-squares version of support vector machines, and Gaussian processes. A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialise a quantum system in a state whose amplitudes reflect the features of the entire dataset. Although efficient methods for state preparation are known for specific cases, this step easily hides the complexity of the task. 
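Amplitude encoding itself is easy to illustrate classically: a vector of length 2^n, once normalised, can be read as the amplitude vector of an n-qubit state, so the number of qubits grows only logarithmically in the dimension of the input. The numpy sketch below shows only this bookkeeping and deliberately says nothing about the cost of preparing such a state on hardware, which is the bottleneck noted above.

```python
# A purely classical numpy illustration of amplitude encoding as described
# above: a classical vector is padded to a power-of-two length and normalised,
# then interpreted as the amplitude vector of an n-qubit state, so n qubits
# represent 2^n values. No quantum hardware or state-preparation circuit is
# modelled here.
import numpy as np

def amplitude_encode(x):
    """Pad x to a power-of-two length and normalise it to unit 2-norm."""
    x = np.asarray(x, dtype=complex)
    n_qubits = int(np.ceil(np.log2(len(x)))) if len(x) > 1 else 1
    padded = np.zeros(2 ** n_qubits, dtype=complex)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded), n_qubits

state, n_qubits = amplitude_encode([0.3, 0.1, 0.4, 0.1, 0.5])
print(n_qubits, np.vdot(state, state).real)   # 3 qubits, squared norm 1.0
```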
In a variational quantum algorithm, a classical computer optimizes the parameters used to prepare a quantum state, while a quantum computer is used to do the actual state preparation and measurement. VQAs are considered promising candidates for noisy intermediate-scale quantum computers. Variational quantum circuits (or parameterized quantum circuits) are a popular class of VQAs where the parameters are those used in a fixed quantum circuit. Researchers have studied VQCs to solve optimization problems and find the ground state energy of complex quantum systems, which are difficult to solve using a classical computer. Pattern recognition is one of the important tasks of machine learning, and binary classification is one of the tools or algorithms used to find patterns. Binary classification is used in supervised learning and in unsupervised learning. In QML, classical bits are converted to qubits and they are mapped to Hilbert space; complex-valued data are used in a quantum binary classifier to exploit the advantages of Hilbert space. By exploiting quantum mechanical properties such as superposition, entanglement, and interference, the quantum binary classifier can produce accurate results in a short period of time. Another approach to improving classical machine learning with quantum information processing uses amplitude amplification methods based on Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the k-medians and the k-nearest neighbors algorithms. Other applications include quadratic speedups in the training of perceptrons. An example of amplitude amplification being used in a machine learning algorithm is Grover's search algorithm minimization, in which a subroutine uses Grover's search algorithm to find an element less than some previously defined element. This can be done with an oracle that determines whether or not a state with a corresponding element is less than the predefined one. Grover's algorithm can then find an element such that the condition is met. The minimization is initialized with some random element in the data set, and iteratively applies this subroutine to find the minimum element in the data set. This minimization is notably used in quantum k-medians, and it has a speedup of at least O ( n k ) {\displaystyle {\mathcal {O}}\left({\sqrt {\frac {n}{k}}}\right)} compared to classical versions of k-medians, where n {\displaystyle n} is the number of data points and k {\displaystyle k} is the number of clusters. Amplitude amplification is often combined with quantum walks to achieve the same quadratic speedup. Quantum walks have been proposed to enhance Google's PageRank algorithm as well as the performance of reinforcement learning agents in the projective simulation framework. In quantum-enhanced reinforcement learning, a quantum agent interacts with a classical or quantum environment and occasionally receives rewards for its actions. In some situations, either because of the quantum processing capability of the agent, or due to the possibility of probing the environment in superposition, a quantum speedup may be achieved. Implementations of these kinds of protocols have been proposed for systems of trapped ions and superconducting circuits. 
A quantum speedup of the agent's internal decision-making time has been experimentally demonstrated in trapped ions, while a quantum speedup of the learning time in a fully coherent (`quantum') interaction between agent and environment has been experimentally realized in a photonic setup. Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate functions. This is a method of discretizing a function with many local minima or maxima in order to determine the observables of the function. The process can be distinguished from Simulated annealing by the Quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent Schrödinger equation guides the time evolution of the system, serving to affect the amplitude of each state as time increases. Eventually, the ground state can be reached to yield the instantaneous Hamiltonian of the system. Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. Examples include deep learning, probabilistic programming, and other machine learning and artificial intelligence applications. A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, it has been recently recognized as a potential candidate to speed up computations that rely on sampling by exploiting quantum effects. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset. The D-Wave 2X system hosted at NASA Ames Research Center has been recently used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures. Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks. The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets. In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is quantum speedup in sampling applications. 
Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward. Reverse annealing has been used as well to solve a fully connected quantum restricted Boltzmann machine. Inspired by the success of Boltzmann machines based on classical Boltzmann distribution, a new machine learning approach based on quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed. Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. It is possible that a specific type of quantum Boltzmann machine has been trained in the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines. Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing. The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template. This provides an exponential reduction in computational complexity in probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware. Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models. A novel design for multi-dimensional vectors that uses circuits as convolution filters is QCNN. It was inspired by the advantages of CNNs and the power of QML. It is made using a combination of a variational quantum circuit (VQC) and a deep neural network(DNN), fully utilizing the power of extremely parallel processing on a superposition of a quantum state with a finite number of qubits. The main strategy is to carry out an iterative optimization process in the NISQ devices, without the negative impact of noise, which is possibly incorporated into the circuit parameter, and without the need for quantum error correction. The quantum circuit must effectively handle spatial information in order for QCNN to function as CNN. The convolution filter is the most basic technique for making use of spatial information. One or more quantum convolutional filters make up a quantum convolutional neural network (QCNN), and each of these filters transforms input data using a quantum circuit that can be created in an organized or randomized way. Three parts that make up the quantum convolutional filter are: the encoder, the parameterized quantum circuit (PQC), and the measurement. 
The quantum convolutional filter can be seen as an extension of the filter in the traditional CNN because it was designed with trainable parameters. Quantum neural networks take advantage of the hierarchical structures, and for each subsequent layer, the number of qubits from the preceding layer is decreased by a factor of two. For n input qubits, these structures have O(log(n)) layers, allowing for shallow circuit depth. Additionally, they are able to avoid "barren plateaus", one of the most significant issues with PQC-based algorithms, ensuring trainability. Despite the fact that the QCNN model does not include the corresponding quantum operation, the fundamental idea of the pooling layer is also offered to assure validity. In the QCNN architecture, the pooling layer is typically placed between succeeding convolutional layers. Its function is to shrink the representation's spatial size while preserving crucial features, which allows it to reduce the number of parameters, streamline network computing, and manage over-fitting. Such a process can be accomplished by applying full tomography on the state to reduce it all the way down to one qubit, which is then processed further. The most frequently used unit type in the pooling layer is max pooling, although there are other types as well. Similar to conventional feed-forward neural networks, the last module is a fully connected layer with full connections to all activations in the preceding layer. Translational invariance, which requires identical blocks of parameterized quantum gates within a layer, is a distinctive feature of the QCNN architecture. In the most general case of QML, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic. One class of problems that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would only require classical learning. However, one can show that a fully quantum approach is strictly superior in this case. (This also relates to work on quantum pattern matching.) The problem of learning unitary transformations can be approached in a similar way. Going beyond the specific problem of learning states and transformations, the task of clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data-points and the information processing device which runs the algorithm are quantum. Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting was introduced, in which it was also shown that the possibility of probing the environment in superpositions permits a quantum speedup in reinforcement learning. Such a speedup in the reinforcement-learning paradigm has been experimentally demonstrated in a photonic setup. 
The need for models that can be understood by humans emerges in QML in analogy to classical machine learning and drives the research field of explainable QML (or XQML in analogy to XAI/XML). These efforts are often also referred to as Interpretable Machine Learning (IML, and by extension IQML). XQML/IQML can be considered as an alternative research direction instead of finding a quantum advantage. For example, XQML has been used in the context of mobile malware detection and classification. Quantum Shapley values have also been proposed to interpret gates within a circuit based on a game-theoretic approach. For this purpose, gates instead of features act as players in a coalitional game with a value function that depends on measurements of the quantum circuit of interest. Additionally, a quantum version of the classical technique known as LIME (Linear Interpretable Model-Agnostic Explanations) has also been proposed, known as Q-LIME. Quantum kernel methods have emerged as particularly promising approaches for near-term applications. Large-scale benchmarking studies encompassing over 20,000 trained models have provided comprehensive insights into the effectiveness of fidelity quantum kernels (FQKs) and projected quantum kernels (PQKs) across diverse classification and regression tasks. These studies have revealed universal patterns that guide effective quantum kernel method design. In the generative modeling domain, quantum generative adversarial networks and quantum circuit Born machines have shown particular promise for tabular data synthesis. Novel quantum generative models for tabular data have demonstrated performance improvements of 8.5% over leading classical models while using only 0.072% of the parameters, indicating significant potential for parameter-efficient learning. Classical learning applied to quantum problems The term "quantum machine learning" sometimes refers to classical machine learning performed on data from quantum systems. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other applications include learning Hamiltonians and automatically generating quantum experiments. Quantum learning theory Quantum learning theory pursues a mathematical analysis of the quantum generalizations of classical learning models and of the possible speed-ups or other improvements that they may provide. The framework is very similar to that of classical computational learning theory, but the learner in this case is a quantum information processing device, while the data may be either classical or quantum. Quantum learning theory should be contrasted with the quantum-enhanced machine learning discussed above, where the goal was to consider specific problems and to use quantum protocols to improve the time complexity of classical algorithms for these problems. Although quantum learning theory is still under development, partial results in this direction have been obtained. The starting point in learning theory is typically a concept class, a set of possible concepts. Usually a concept is a function on some domain, such as { 0 , 1 } n {\displaystyle \{0,1\}^{n}} . For example, the concept class could be the set of disjunctive normal form (DNF) formulas on n bits or the set of Boolean circuits of some constant depth. The goal for the learner is to learn (exactly or approximately) an unknown target concept from this concept class. The learner may be actively interacting with the target concept, or passively receiving samples from it. 
In active learning, a learner can make membership queries to the target concept c, asking for its value c(x) on inputs x chosen by the learner. The learner then has to reconstruct the exact target concept, with high probability. In the model of quantum exact learning, the learner can make membership queries in quantum superposition. If the complexity of the learner is measured by the number of membership queries it makes, then quantum exact learners can be polynomially more efficient than classical learners for some concept classes, but not more. If complexity is measured by the amount of time the learner uses, then there are concept classes that can be learned efficiently by quantum learners but not by classical learners (under plausible complexity-theoretic assumptions). A natural model of passive learning is Valiant's probably approximately correct (PAC) learning. Here the learner receives random examples (x,c(x)), where x is distributed according to some unknown distribution D. The learner's goal is to output a hypothesis function h such that h(x)=c(x) with high probability when x is drawn according to D. The learner has to be able to produce such an 'approximately correct' h for every D and every target concept c in its concept class. We can consider replacing the random examples by potentially more powerful quantum examples ∑x √D(x) |x, c(x)⟩ {\displaystyle \sum _{x}{\sqrt {D(x)}}|x,c(x)\rangle } . In the PAC model (and the related agnostic model), this doesn't significantly reduce the number of examples needed: for every concept class, classical and quantum sample complexity are the same up to constant factors. However, for learning under some fixed distribution D, quantum examples can be very helpful, for example for learning DNF under the uniform distribution. When considering time complexity, there exist concept classes that can be PAC-learned efficiently by quantum learners, even from classical examples, but not by classical learners (again, under plausible complexity-theoretic assumptions). This passive learning type is also the most common scheme in supervised learning: a learning algorithm typically takes the training examples as fixed, without the ability to query the labels of unlabelled examples. Outputting a hypothesis h is a step of induction. Classically, an inductive model splits into a training and an application phase: the model parameters are estimated in the training phase, and the learned model is applied arbitrarily many times in the application phase. In the asymptotic limit of the number of applications, this splitting of phases is also present with quantum resources. Implementations and experiments The earliest experiments were conducted using the adiabatic D-Wave quantum computer, for instance, to detect cars in digital images using regularized boosting with a nonconvex objective function in a demonstration in 2009. Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of QML for future technological implementations. In 2013, Google Research, NASA, and the Universities Space Research Association launched the Quantum Artificial Intelligence Lab, which explores the use of the adiabatic D-Wave quantum computer. A more recent example trained probabilistic generative models with arbitrary pairwise connectivity, showing that such models are capable of generating handwritten digits as well as reconstructing noisy images of bars and stripes and of handwritten digits.
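The quantum example state defined in the learning-theory discussion above can be written out explicitly for a toy concept. The sketch below is an illustration under stated assumptions: the target concept is the parity function on three bits and D is taken to be uniform. It builds the amplitude vector of the state with amplitude √D(x) on each basis state |x, c(x)⟩ and checks that the result is normalized.

    import numpy as np
    from itertools import product

    n = 3
    def concept(bits):                     # target concept c: parity of the input bits
        return sum(bits) % 2

    # Quantum example over n input qubits plus one label qubit:
    # amplitude sqrt(D(x)) on basis state |x, c(x)>, with D uniform here.
    state = np.zeros(2 ** (n + 1))
    D = 1.0 / 2 ** n
    for bits in product([0, 1], repeat=n):
        x_index = int("".join(map(str, bits)), 2)
        label = concept(bits)
        state[2 * x_index + label] = np.sqrt(D)

    print("norm =", np.linalg.norm(state))                    # 1.0: a valid quantum state
    print("nonzero basis states:", np.count_nonzero(state))   # one per input x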
Using a different annealing technology based on nuclear magnetic resonance (NMR), a quantum Hopfield network was implemented in 2009 that mapped the input data and memorized data to Hamiltonians, allowing the use of adiabatic quantum computation. NMR technology also enables universal quantum computing,[citation needed] and it was used for the first experimental implementation of a quantum support vector machine to distinguish the handwritten numbers '6' and '9' on a liquid-state quantum computer in 2015. The training data involved the pre-processing of the images, which maps them to normalized 2-dimensional vectors so as to represent the images as the states of a qubit. The two entries of the vector are the vertical and horizontal ratios of the pixel intensity of the image. Once the vectors are defined on the feature space, the quantum support vector machine was implemented to classify the unknown input vector. The readout avoids costly quantum tomography by reading out the final state in terms of the direction (up/down) of the NMR signal. Photonic implementations are attracting more attention, not least because they do not require extensive cooling. Simultaneous spoken-digit and speaker recognition and chaotic time-series prediction were demonstrated at data rates beyond 1 gigabyte per second in 2013. Using non-linear photonics to implement an all-optical linear classifier, a perceptron model was capable of learning the classification boundary iteratively from training data through a feedback rule. A core building block in many learning algorithms is the calculation of the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015. Recently, based on a neuromimetic approach, a novel ingredient has been added to the field of QML, in the form of a so-called quantum memristor, a quantized model of the standard classical memristor. This device can be constructed by means of a tunable resistor, weak measurements on the system, and a classical feed-forward mechanism. An implementation of a quantum memristor in superconducting circuits has been proposed, and an experiment with quantum dots has been performed. A quantum memristor would implement nonlinear interactions in the quantum dynamics, which would aid the search for a fully functional quantum neural network. In 2016, IBM launched an online cloud-based platform for quantum software developers, called the IBM Q Experience. This platform consists of several fully operational quantum processors accessible via the IBM Web API. In doing so, the company is encouraging software developers to pursue new algorithms through a development environment with quantum capabilities. New architectures are being explored on an experimental basis, up to 32 qubits, using both trapped-ion and superconducting quantum computing methods. In October 2019, it was noted that the introduction of quantum random number generators (QRNGs) to machine learning models, including neural networks and convolutional neural networks for random initial weight distribution and random forests for splitting processes, had a profound effect on their performance when compared with the classical method of pseudorandom number generators (PRNGs). However, in a more recent publication from 2021, these claims could not be reproduced for neural network weight initialization, and no significant advantage of using QRNGs over PRNGs was found.
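The kind of image preprocessing described for the NMR support vector machine experiment above can be imitated classically. The sketch below is a rough reconstruction under stated assumptions rather than the procedure actually used in that experiment: the tiny binary glyphs and the half-image intensity shares used as the two features are illustrative stand-ins for the ratio features described above. It maps an image to a normalized two-dimensional vector that could serve as the amplitudes of a single-qubit state.

    import numpy as np

    def image_to_qubit_vector(img):
        """Map an image to a normalized 2-D feature vector usable as qubit amplitudes."""
        img = np.asarray(img, dtype=float)
        total = img.sum()
        vertical = img[: img.shape[0] // 2].sum() / total       # intensity share, top half
        horizontal = img[:, : img.shape[1] // 2].sum() / total   # intensity share, left half
        v = np.array([vertical, horizontal])
        return v / np.linalg.norm(v)                             # unit norm

    # crude 4x4 sketches of a "6"-like and a "9"-like glyph (purely illustrative)
    six  = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0]]
    nine = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]]
    print("six  ->", np.round(image_to_qubit_vector(six), 3))
    print("nine ->", np.round(image_to_qubit_vector(nine), 3))   # distinct feature vectors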
The work also demonstrated that the generation of fair random numbers with a gate-based quantum computer is a non-trivial task on NISQ devices, and QRNGs are therefore typically much more difficult to use in practice than PRNGs. A paper published in December 2018 reported on an experiment using a trapped-ion system demonstrating a quantum speedup of the deliberation time of reinforcement learning agents employing internal quantum hardware. In March 2021, a team of researchers from Austria, the Netherlands, the US and Germany reported the experimental demonstration of a quantum speedup of the learning time of reinforcement learning agents interacting fully quantumly with the environment. The relevant degrees of freedom of both agent and environment were realized on a compact and fully tunable integrated nanophotonic processor. Barren plateau problem The barren plateau problem, in which quantum algorithms encounter flat optimization landscapes, has seen significant theoretical and practical advances in 2025. Los Alamos National Laboratory researchers have provided the first mathematical characterization of why and when barren plateaus occur in variational quantum algorithms, establishing theoretical guarantees for algorithm scalability. This work solves a key usability problem for quantum machine learning by providing rigorous theorems that predict whether a given architecture will remain trainable as it scales to larger quantum systems. The breakthrough eliminates the previous trial-and-error approach that had led to researcher fatigue in the field. Skepticism While machine learning itself is now not only a research field but an economically significant and fast-growing industry, and quantum computing is a well-established field of both theoretical and experimental research, QML remains a purely theoretical field of study. Attempts to experimentally demonstrate concepts of QML remain insufficient.[citation needed] Further, another obstacle exists at the prediction stage, because the outputs of quantum learning models are inherently random. This creates an often considerable overhead, as many executions of a quantum learning model have to be aggregated to obtain an actual prediction. Many of the leading scientists who publish extensively in the field of QML warn about the extensive hype around the topic and are very restrained if asked about its practical uses in the foreseeable future. Sophia Chen collected some of the statements made by well-known scientists in the field: See also References
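The prediction overhead mentioned in the skepticism discussion above can be quantified with a short simulation. The sketch below is a generic illustration, not tied to any particular quantum model: it treats a model output as a Bernoulli measurement with an assumed success probability and shows how many repeated executions ("shots") are needed before the aggregated estimate stabilizes, with the standard error shrinking as the square root of the number of shots.

    import numpy as np

    rng = np.random.default_rng(0)
    p_true = 0.73      # assumed probability of measuring outcome 1 for a fixed input

    for shots in (10, 100, 1000, 10000):
        estimates = rng.binomial(shots, p_true, size=2000) / shots
        theory = np.sqrt(p_true * (1 - p_true) / shots)
        print(f"shots={shots:>6}  mean={estimates.mean():.3f}  "
              f"empirical std={estimates.std():.4f}  theoretical std={theory:.4f}")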
========================================
[SOURCE: https://en.wikipedia.org/wiki/Data_cleaning] | [TOKENS: 1021]
Contents Data cleansing Data cleansing or data cleaning is the process of identifying and correcting (or removing) corrupt, inaccurate, or irrelevant records from a dataset, table, or database. It involves detecting incomplete, incorrect, or inaccurate parts of the data and then replacing, modifying, or deleting the affected data. Data cleansing can be performed interactively using data wrangling tools, or through batch processing often via scripts or a data quality firewall. After cleansing, a data set should be consistent with other similar data sets in the system. The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores. Data cleaning differs from data validation in that validation almost invariably means data is rejected from the system at entry and is performed at the time of entry, rather than on batches of data. The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities. The validation may be strict (such as rejecting any address that does not have a valid postal code), or with fuzzy or approximate string matching (such as correcting records that partially match existing, known records). Some data cleansing solutions will clean data by cross-checking with a validated data set. A common data cleansing practice is data enhancement, where data is made more complete by adding related information. For example, appending addresses with any phone numbers related to that address. Data cleansing may also involve harmonization (or normalization) of data, which is the process of bringing together data of "varying file formats, naming conventions, and columns", and transforming it into one cohesive data set; a simple example is the expansion of abbreviations ("st, rd, etc." to "street, road, etcetera"). Motivation Administratively incorrect, inconsistent data can lead to false conclusions and misdirect investments on both public and private scales. For instance, the government may want to analyze population census figures to decide which regions require further spending and investment on infrastructure and services. In this case, it will be important to have access to reliable data to avoid erroneous fiscal decisions. In the business world, incorrect data can be costly. Many companies use customer information databases that record data like contact information, addresses, and preferences. For instance, if the addresses are inconsistent, the company will suffer the cost of resending mail or even losing customers. Data quality High-quality data needs to pass a set of quality criteria. Those include: The term integrity encompasses accuracy, consistency and some aspects of validation (see also Data integrity) but is rarely used by itself in data-cleansing contexts because it is insufficiently specific. (For example, "referential integrity" is a term used to refer to the enforcement of foreign-key constraints above.) Process Good quality source data has to do with "Data Quality Culture" and must be initiated at the top of the organization. It is not just a matter of implementing strong validation checks on input screens, because almost no matter how strong these checks are, they can often still be circumvented by the users. 
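As a concrete illustration of correcting values against a known list of entities with approximate string matching, as described above, the short sketch below uses Python's standard-library difflib; the street names and the 0.8 cutoff are made-up values for illustration.

    import difflib

    KNOWN_STREETS = ["Market Street", "Mission Street", "Valencia Street"]

    def clean_street(value, cutoff=0.8):
        """Return the closest known street name, or None if nothing matches well enough."""
        matches = difflib.get_close_matches(value, KNOWN_STREETS, n=1, cutoff=cutoff)
        return matches[0] if matches else None

    print(clean_street("Markt Stret"))     # -> "Market Street" (fuzzy correction)
    print(clean_street("Elm Street"))      # -> None (no sufficiently close match)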
There is a nine-step guide for organizations that wish to improve data quality: Others include: System The essential job of this system is to find a balance between fixing dirty data and maintaining the data as close as possible to the original data from the source production system. This is a challenge for the extract, transform, load architect. The system should offer an architecture that can cleanse data, record quality events and measure/control the quality of data in the data warehouse. A good start is to perform a thorough data profiling analysis that will help define the required complexity of the data cleansing system and also give an idea of the current data quality in the source system(s). Quality screens Part of the data cleansing system is a set of diagnostic filters known as quality screens. Each of them implements a test in the data flow that, if it fails, records an error in the error event schema. Quality screens are divided into three categories: When a quality screen records an error, it can either stop the dataflow process, send the faulty data somewhere other than the target system, or tag the data. The last option is considered the best solution, because the first option requires someone to deal manually with the issue each time it occurs, and the second implies that data are missing from the target system (integrity), and it is often unclear what should happen to these data. Criticism of existing tools and processes Most data cleansing tools have limitations in usability: Error event schema The error event schema holds records of all error events thrown by the quality screens. It consists of an error event fact table with foreign keys to three dimension tables that represent the date (when), the batch job (where), and the screen (which screen produced the error). It also holds information about exactly when the error occurred and the severity of the error. There is also an error event detail fact table with a foreign key to the main table that contains detailed information about the table, record and field in which the error occurred, together with the error condition.
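A minimal version of the "tag the data" option described above might look like the following sketch; the screen, the column names, and the error codes are invented for illustration. Each screen is a predicate that, instead of stopping the data flow, records an error event and lets the row continue downstream.

    from datetime import datetime, timezone

    error_events = []   # stand-in for the error event fact table

    def column_screen(rows, column, predicate, screen_name, severity):
        """Run one column screen: tag failing rows and log an error event per failure."""
        for row in rows:
            if not predicate(row.get(column)):
                row.setdefault("quality_tags", []).append(screen_name)
                error_events.append({
                    "screen": screen_name,
                    "column": column,
                    "severity": severity,
                    "occurred_at": datetime.now(timezone.utc).isoformat(),
                })
        return rows

    rows = [{"customer_id": 1, "postal_code": "94103"},
            {"customer_id": 2, "postal_code": None}]
    column_screen(rows, "postal_code", lambda v: bool(v), "postal_code_not_null", "warning")
    print(rows)
    print(error_events)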
========================================
[SOURCE: https://en.wikipedia.org/wiki/Neuromorphic_engineering] | [TOKENS: 908]
Contents Neuromorphic computing Neuromorphic computing is a computing approach inspired by the human brain's structure and function. It uses artificial neurons to perform computations, mimicking neural systems for tasks such as perception, motor control, and multisensory integration. These systems, implemented in analog, digital, or mixed-mode VLSI, prioritize robustness, adaptability, and learning by emulating the brain’s distributed processing across small computing elements. This interdisciplinary field integrates biology, physics, mathematics, computer science, and electronic engineering to develop systems that emulate the brain’s morphology and computational strategies. Neuromorphic systems aim to enhance energy efficiency and computational power for applications including artificial intelligence, pattern recognition, and sensory processing. History Carver Mead proposed one of the first applications for neuromorphic engineering in the late 1980s. In 2006, researchers at Georgia Tech developed a field programmable neural array, a silicon-based chip modeling neuron channel-ion characteristics. In 2011, MIT researchers created a chip mimicking synaptic communication using 400 transistors and standard CMOS techniques. In 2012 HP Labs researchers reported that Mott memristors exhibit volatile behavior at low temperatures, enabling the creation of neuristors that mimic neuron behavior and support Turing machine components. Also in 2012, Purdue University researchers presented a neuromorphic chip design using lateral spin valves and memristors, noted for energy efficiency. The 2013 Blue Brain Project creates detailed digital models of rodent brains. Neurogrid, developed by Brains in Silicon at Stanford University, used 16 NeuroCore chips to emulate 65,536 neurons with high energy efficiency in 2014. The 2014 BRAIN Initiative and IBM’s TrueNorth chip contributed to neuromorphic advancements. The 2016 BrainScaleS project, a hybrid neuromorphic supercomputer at University of Heidelberg, operated 864 times faster than biological neurons. In 2017, Intel unveiled its Loihi chip, using an asynchronous artificial neural network for efficient learning and inference. Also in 2017 IMEC’s self-learning chip, based on OxRAM, demonstrated music composition by learning from minuets. In 2022, MIT researchers developed artificial synapses using protons for analog deep learning. In 2019, the European Union funded neuromorphic quantum computing to explore quantum operations using neuromorphic systems. Also in 2022, researchers at the Max Planck Institute for Polymer Research developed an organic artificial spiking neuron for in-situ neuromorphic sensing and biointerfacing. Researchers reported in 2024 that chemical systems in liquid solutions can detect sound at various wavelengths, offering potential for neuromorphic applications. Neurological inspiration Neuromorphic engineering emulates the brain’s structure and operations, focusing on the analog nature of biological computation and the role of neurons in cognition. The brain processes information via neurons using chemical signals, abstracted into mathematical functions. Neuromorphic systems distribute computation across small elements, similar to neurons, using methods guided by anatomical and functional neural maps from electron microscopy and neural connection studies. Implementation Neuromorphic systems employ hardware such as oxide-based memristors, spintronic memories, threshold switches, and transistors. 
Software implementations train spiking neural networks using error backpropagation. Neuromemristive systems use memristors to implement neuroplasticity, focusing on abstract neural network models rather than detailed biological mimicry. These systems enable applications in speech recognition, face recognition, and object recognition, and can replace conventional digital logic gates. The Caravelli-Traversa-Di Ventra equation describes the evolution of memristive memory, revealing tunneling phenomena and Lyapunov functions. Neuromorphic principles extend to sensors: applied to the detection of light, they yield the retinomorphic sensor or, when employed in an array, the event camera, which mimic human vision by registering brightness changes at individual pixels, reducing power consumption. Ethical considerations Neuromorphic systems raise the same ethical questions as other approaches to artificial intelligence. Daniel Lim argued that advanced neuromorphic systems could lead to machine consciousness, raising concerns about whether civil rights and other protocols should be extended to them. Legal debates, such as in Acohs Pty Ltd v. Ucorp Pty Ltd, question the ownership of work produced by neuromorphic systems, as non-human-generated outputs may not be copyrightable. See also References External links
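As a software-level illustration of the spiking behaviour that neuromorphic hardware and the spiking neural networks mentioned above emulate, the following sketch simulates a single leaky integrate-and-fire neuron; the time constant, threshold, and input current are arbitrary illustrative values, not parameters of any chip described in this article.

    import numpy as np

    def lif_spike_train(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                        v_threshold=1.0, v_reset=0.0):
        """Simulate a leaky integrate-and-fire neuron and return spike times (seconds)."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # membrane potential leaks toward rest and integrates the input current
            v += dt * (-(v - v_rest) / tau + i_in)
            if v >= v_threshold:
                spikes.append(step * dt)
                v = v_reset          # reset after emitting a spike
        return spikes

    current = np.full(1000, 60.0)    # 1 s of constant drive at 1 ms resolution
    print("first spike times (s):", np.round(lif_spike_train(current), 3)[:5], "...")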
========================================
[SOURCE: https://en.wikipedia.org/wiki/Structured_prediction] | [TOKENS: 589]
Contents Structured prediction Structured prediction or structured output learning is an umbrella term for supervised machine learning techniques that involve predicting structured objects, rather than discrete or real values. Similar to commonly used supervised learning techniques, structured prediction models are typically trained by means of observed data in which the predicted value is compared to the ground truth, and this comparison is used to adjust the model parameters. Due to the complexity of the model and the interrelations of the predicted variables, the processes of model training and inference are often computationally infeasible, so approximate inference and learning methods are used. Applications An example application is the problem of translating a natural language sentence into a syntactic representation such as a parse tree. This can be seen as a structured prediction problem in which the structured output domain is the set of all possible parse trees. Structured prediction is used in a wide variety of domains including bioinformatics, natural language processing (NLP), speech recognition, and computer vision. Sequence tagging is a class of problems prevalent in NLP in which input data are often sequential, for instance sentences of text. The sequence tagging problem appears in several guises, such as part-of-speech tagging (POS tagging) and named entity recognition. In POS tagging, for example, each word in a sequence must be 'tagged' with a class label representing the type of word. The main challenge of this problem is to resolve ambiguity: the words "sentence" and "tagged" in English, for example, can also be used as verbs. While this problem can be solved by simply performing classification of individual tokens, this approach does not take into account the empirical fact that tags do not occur independently; instead, each tag displays a strong conditional dependence on the tag of the previous word. This fact can be exploited in a sequence model such as a hidden Markov model or conditional random field that predicts the entire tag sequence for a sentence (rather than just individual tags) via the Viterbi algorithm. Techniques Probabilistic graphical models form a large class of structured prediction models. In particular, Bayesian networks and random fields are popular. Other algorithms and models for structured prediction include inductive logic programming, case-based reasoning, structured SVMs, Markov logic networks, Probabilistic Soft Logic, and constrained conditional models. The main techniques are: One of the easiest ways to understand algorithms for general structured prediction is the structured perceptron by Collins. This algorithm combines the perceptron algorithm for learning linear classifiers with an inference algorithm (classically the Viterbi algorithm when used on sequence data) and can be described abstractly as follows: starting from a zero weight vector, repeatedly predict the highest-scoring structure for each training example and, whenever the prediction differs from the ground truth, add the feature vector of the correct structure to the weights and subtract that of the predicted structure. In practice, finding the argmax over G E N ( x ) {\displaystyle {GEN}({x})} is done using an algorithm such as Viterbi or max-sum, rather than an exhaustive search through an exponentially large set of candidates. The idea of learning is similar to that for multiclass perceptrons. References External links
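To make the abstract description of the structured perceptron above concrete, here is a small sketch of the update rule for sequence tagging. The joint feature function, the toy two-tag set, and the brute-force argmax over all tag sequences are simplifications chosen only for readability; in practice the argmax would be computed with Viterbi, as noted above.

    from itertools import product
    from collections import defaultdict

    TAGS = ["N", "V"]                      # toy tag set

    def features(words, tags):
        """Joint feature map Phi(x, y): word/tag and tag-bigram indicator counts."""
        phi = defaultdict(float)
        prev = "<s>"
        for w, t in zip(words, tags):
            phi[("emit", w, t)] += 1.0
            phi[("trans", prev, t)] += 1.0
            prev = t
        return phi

    def score(w, phi):
        return sum(w[k] * v for k, v in phi.items())

    def predict(w, words):
        # brute-force argmax over GEN(x); a real tagger would use Viterbi here
        return max(product(TAGS, repeat=len(words)),
                   key=lambda tags: score(w, features(words, tags)))

    def train(data, epochs=5):
        w = defaultdict(float)
        for _ in range(epochs):
            for words, gold in data:
                pred = predict(w, words)
                if pred != tuple(gold):
                    for k, v in features(words, gold).items():   # promote gold features
                        w[k] += v
                    for k, v in features(words, pred).items():   # demote predicted features
                        w[k] -= v
        return w

    data = [(["dogs", "bark"], ["N", "V"]), (["cats", "sleep"], ["N", "V"])]
    w = train(data)
    print(predict(w, ["dogs", "sleep"]))   # expected ('N', 'V')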
========================================
[SOURCE: https://en.wikipedia.org/wiki/Reinforcement_learning] | [TOKENS: 6752]
Contents Reinforcement learning In machine learning and optimal control, reinforcement learning (RL) is concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning. While supervised learning and unsupervised learning algorithms respectively attempt to discover patterns in labeled and unlabeled data, reinforcement learning involves training an agent through interactions with its environment. To learn to maximize rewards from these interactions, the agent makes decisions between trying new actions to learn more about the environment (exploration), or using current knowledge of the environment to take the best action (exploitation). The search for the optimal balance between these two strategies is known as the exploration–exploitation dilemma. The environment is typically stated in the form of a Markov decision process, as many reinforcement learning algorithms use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large Markov decision processes where exact methods become infeasible. Principles Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, RL is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in RL have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation (particularly in the absence of a mathematical model of the environment). Basic reinforcement learning is modeled as a Markov decision process: The purpose of reinforcement learning is for the agent to learn an optimal (or near-optimal) policy that maximizes the reward function or other user-provided reinforcement signal that accumulates from immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning. A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step t, the agent receives the current state S t {\displaystyle S_{t}} and reward R t {\displaystyle R_{t}} . It then chooses an action A t {\displaystyle A_{t}} from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state S t + 1 {\displaystyle S_{t+1}} and the reward R t + 1 {\displaystyle R_{t+1}} associated with the transition ( S t , A t , S t + 1 ) {\displaystyle (S_{t},A_{t},S_{t+1})} is determined. 
The goal of a reinforcement learning agent is to learn a policy: π : S × A → [ 0 , 1 ] π ( s , a ) = Pr ( A t = a ∣ S t = s ) {\displaystyle {\begin{aligned}&\pi :{\mathcal {S}}\times {\mathcal {A}}\to [0,1]\\&\pi (s,a)=\Pr(A_{t}{=}a\mid S_{t}{=}s)\end{aligned}}} that maximizes the expected cumulative reward. Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to have full observability. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have partial observability, and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed. When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion of regret. In order to act near optimally, the agent must reason about long-term consequences of its actions (i.e., maximize future rewards), although the immediate reward associated with this might be negative. Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage, robot control, photovoltaic generators, backgammon, checkers, Go (AlphaGo), and autonomous driving systems. Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use of function approximation to deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations: The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems. Exploration The trade-off between exploration and exploitation has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis (1997). Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical. One such method is ε {\displaystyle \varepsilon } -greedy, where 0 < ε < 1 {\displaystyle 0<\varepsilon <1} is a parameter controlling the amount of exploration vs. exploitation. With probability 1 − ε {\displaystyle 1-\varepsilon } , exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability ε {\displaystyle \varepsilon } , exploration is chosen, and the action is chosen uniformly at random. 
ε {\displaystyle \varepsilon } is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics. Algorithms for control learning Even if the issue of exploration is disregarded and even if the state was observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards. The agent's action selection is modeled as a map called policy: π : A × S → [ 0 , 1 ] π ( a , s ) = Pr ( A t = a ∣ S t = s ) {\displaystyle {\begin{aligned}&\pi :{\mathcal {A}}\times {\mathcal {S}}\to [0,1]\\&\pi (a,s)=\Pr(A_{t}{=}a\mid S_{t}{=}s)\end{aligned}}} The policy map gives the probability of taking action a {\displaystyle a} when in state s {\displaystyle s} .: 61 There are also deterministic policies π {\displaystyle \pi } for which π ( s ) {\displaystyle \pi (s)} denotes the action that should be played at state s {\displaystyle s} . The state-value function V π ( s ) {\displaystyle V_{\pi }(s)} is defined as, expected discounted return starting with state s {\displaystyle s} , i.e. S 0 = s {\displaystyle S_{0}=s} , and successively following policy π {\displaystyle \pi } . Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.: 60 V π ( s ) = E ⁡ [ G ∣ S 0 = s ] = E ⁡ [ ∑ t = 0 ∞ γ t R t + 1 ∣ S 0 = s ] , {\displaystyle V_{\pi }(s)=\operatorname {\mathbb {E} } [G\mid S_{0}{=}s]=\operatorname {\mathbb {E} } \left[\sum _{t=0}^{\infty }\gamma ^{t}R_{t+1}\mid S_{0}{=}s\right],} where the random variable G {\displaystyle G} denotes the discounted return, and is defined as the sum of future discounted rewards: G = ∑ t = 0 ∞ γ t R t + 1 = R 1 + γ R 2 + γ 2 R 3 + ⋯ , {\displaystyle G=\sum _{t=0}^{\infty }\gamma ^{t}R_{t+1}=R_{1}+\gamma R_{2}+\gamma ^{2}R_{3}+\cdots ,} where R t + 1 {\displaystyle R_{t+1}} is the reward for transitioning from state S t {\displaystyle S_{t}} to S t + 1 {\displaystyle S_{t+1}} , 0 ≤ γ < 1 {\displaystyle 0\leq \gamma <1} is the discount rate. γ {\displaystyle \gamma } is less than 1, so rewards in the distant future are weighted less than rewards in the immediate future. The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution returned by it depends only on the last state visited (from the observation agent's history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality. The brute force approach entails two steps: One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy. These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search. 
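The ε-greedy rule described above takes only a few lines of code. In the sketch below, the action-value estimates are placeholder values chosen for illustration: exploitation picks an action with maximal estimated value, breaking ties uniformly at random, while exploration picks any action uniformly at random.

    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        """q_values: list of estimated action values for the current state."""
        if random.random() < epsilon:                     # explore
            return random.randrange(len(q_values))
        best = max(q_values)                              # exploit, break ties randomly
        best_actions = [a for a, q in enumerate(q_values) if q == best]
        return random.choice(best_actions)

    q_for_state = [0.2, 0.5, 0.5, -0.1]                   # placeholder estimates
    counts = [0, 0, 0, 0]
    for _ in range(10000):
        counts[epsilon_greedy(q_for_state, epsilon=0.1)] += 1
    print(counts)   # actions 1 and 2 dominate, but every action keeps some probability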
Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns E ⁡ [ G ] {\displaystyle \operatorname {\mathbb {E} } [G]} for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one). These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: A policy is optimal if it achieves the best-expected discounted return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies. To define optimality in a formal manner, define the state-value of a policy π {\displaystyle \pi } by V π ( s ) = E ⁡ [ G ∣ s , π ] , {\displaystyle V^{\pi }(s)=\operatorname {\mathbb {E} } [G\mid s,\pi ],} where G {\displaystyle G} stands for the discounted return associated with following π {\displaystyle \pi } from the initial state s {\displaystyle s} . Defining V ∗ ( s ) {\displaystyle V^{*}(s)} as the maximum possible state-value of V π ( s ) {\displaystyle V^{\pi }(s)} , where π {\displaystyle \pi } is allowed to change, V ∗ ( s ) = max π V π ( s ) . {\displaystyle V^{*}(s)=\max _{\pi }V^{\pi }(s).} A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since V ∗ ( s ) = max π E [ G ∣ s , π ] {\displaystyle V^{*}(s)=\max _{\pi }\mathbb {E} [G\mid s,\pi ]} , where s {\displaystyle s} is a state randomly sampled from the distribution μ {\displaystyle \mu } of initial states (so μ ( s ) = Pr ( S 0 = s ) {\displaystyle \mu (s)=\Pr(S_{0}=s)} ). Although state-values suffice to define optimality, it is useful to define action-values. Given a state s {\displaystyle s} , an action a {\displaystyle a} and a policy π {\displaystyle \pi } , the action-value of the pair ( s , a ) {\displaystyle (s,a)} under π {\displaystyle \pi } is defined by Q π ( s , a ) = E ⁡ [ G ∣ s , a , π ] , {\displaystyle Q^{\pi }(s,a)=\operatorname {\mathbb {E} } [G\mid s,a,\pi ],} where G {\displaystyle G} now stands for the random discounted return associated with first taking action a {\displaystyle a} in state s {\displaystyle s} and following π {\displaystyle \pi } , thereafter. The theory of Markov decision processes states that if π ∗ {\displaystyle \pi ^{*}} is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q π ∗ ( s , ⋅ ) {\displaystyle Q^{\pi ^{*}}(s,\cdot )} with the highest action-value at each state, s {\displaystyle s} . The action-value function of such an optimal policy ( Q π ∗ {\displaystyle Q^{\pi ^{*}}} ) is called the optimal action-value function and is commonly denoted by Q ∗ {\displaystyle Q^{*}} . In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally. Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q k {\displaystyle Q_{k}} ( k = 0 , 1 , 2 , … {\displaystyle k=0,1,2,\ldots } ) that converge to Q ∗ {\displaystyle Q^{*}} . Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) Markov decision processes. 
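Value iteration on a tiny, fully known Markov decision process can be written directly from the definitions above. In the sketch below, the two-state, two-action MDP and the discount factor are invented for illustration; the loop iterates the Bellman optimality backup for the action-value function Q until convergence.

    import numpy as np

    # Toy MDP: 2 states, 2 actions. P[s, a, s'] = transition probability, R[s, a] = reward.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.0, 1.0], [0.5, 0.5]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    gamma = 0.9

    Q = np.zeros((2, 2))
    for _ in range(1000):
        V = Q.max(axis=1)                       # greedy state values
        Q_new = R + gamma * P @ V               # Bellman optimality backup
        if np.max(np.abs(Q_new - Q)) < 1e-10:
            break
        Q = Q_new

    print("Q* =", np.round(Q, 3))
    print("optimal policy:", Q.argmax(axis=1))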
In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces. Monte Carlo methods are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or simulated experience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification of transition probabilities, which is necessary for dynamic programming methods. Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involving random sampling; however, in this context, it specifically refers to methods that compute averages from complete returns, rather than partial returns. These methods function similarly to the bandit algorithms, in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem non-stationary. To address this non-stationarity, Monte Carlo methods use the framework of general policy iteration (GPI). While dynamic programming computes value functions using full knowledge of the Markov decision process, Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieve optimality, first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience. The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor-critic methods belong to this category. The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue. 
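A minimal incremental temporal-difference update, of the kind contrasted with Monte Carlo methods above, is shown in the sketch below; the random-walk environment, step size, and discount are illustrative choices. After each transition the state-value estimate is nudged toward the bootstrapped target and the transition is then discarded.

    import random

    # States 0..4 on a chain; episodes start at 2; reaching state 4 yields reward 1.
    alpha, gamma = 0.1, 1.0
    V = [0.0] * 5

    for _ in range(5000):
        s = 2
        while s not in (0, 4):
            s_next = s + random.choice((-1, 1))
            reward = 1.0 if s_next == 4 else 0.0
            target = reward + gamma * (0.0 if s_next in (0, 4) else V[s_next])
            V[s] += alpha * (target - V[s])     # TD(0): update from a single transition
            s = s_next

    print([round(v, 2) for v in V])   # roughly [0, 0.25, 0.5, 0.75, 0] for this random walk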
Another problem specific to TD comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called λ {\displaystyle \lambda } parameter ( 0 ≤ λ ≤ 1 ) {\displaystyle (0\leq \lambda \leq 1)} that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue. In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping ϕ {\displaystyle \phi } that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair ( s , a ) {\displaystyle (s,a)} are obtained by linearly combining the components of ϕ ( s , a ) {\displaystyle \phi (s,a)} with some weights θ {\displaystyle \theta } : Q ( s , a ) = ∑ i = 1 d θ i ϕ i ( s , a ) . {\displaystyle Q(s,a)=\sum _{i=1}^{d}\theta _{i}\phi _{i}(s,a).} The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored. Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants. Including Deep Q-learning methods when a neural network is used to represent Q, with various applications in stochastic search problems. The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods. Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector θ {\displaystyle \theta } , let π θ {\displaystyle \pi _{\theta }} denote the policy associated to θ {\displaystyle \theta } . Defining the performance function by ρ ( θ ) = ρ π θ {\displaystyle \rho (\theta )=\rho ^{\pi _{\theta }}} under mild conditions this function will be differentiable as a function of the parameter vector θ {\displaystyle \theta } . If the gradient of ρ {\displaystyle \rho } was known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum. Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. 
In recent years, actor–critic methods have been proposed and performed well on various problems. Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search). Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov decision process, the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm. Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt. There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly. Theory Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known. Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations. For incremental algorithms, asymptotic convergence issues have been settled.[clarification needed] Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation). Research Research topics include: Comparison of key algorithms The following table lists the key algorithms for learning a policy depending on several criteria: Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment. This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning. Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies. By introducing fuzzy inference in reinforcement learning, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF - THEN form of fuzzy rules make this approach suitable for expressing the results in a form close to natural language. 
Extending FRL with Fuzzy Rule Interpolation allows the use of reduced size sparse fuzzy rule-bases to emphasize cardinal rules (most important state-action values). In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal. One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL). MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL). RU-IRL is based on random utility theory and Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function. Multi-objective reinforcement learning (MORL) is a form of reinforcement learning concerned with conflicting alternatives. It is distinct from multi-objective optimization in that it is concerned with agents acting in environments. Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. An alternative approach is risk-averse reinforcement learning, where instead of the expected return, a risk-measure of the return is optimized, such as the conditional value at risk (CVaR). In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties. However, CVaR optimization in risk-averse RL requires special care, to prevent gradient bias and blindness to success. Self-reinforcement learning (or self-learning), is a learning paradigm which does not use the concept of immediate reward R a ( s , s ′ ) {\displaystyle R_{a}(s,s')} after transition from s {\displaystyle s} to s ′ {\displaystyle s'} with action a {\displaystyle a} . It does not use an external reinforcement, it only uses the agent internal self-reinforcement. The internal self-reinforcement is provided by mechanism of feelings and emotions. In the learning process emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward, it only includes the state evaluation. The self-reinforcement algorithm updates a memory matrix W = ‖ w ( a , s ) ‖ {\displaystyle W=\|w(a,s)\|} such that in each iteration executes the following machine learning routine: Initial conditions of the memory are received as input from the genetic environment. It is a system with only one input (situation), and only one output (action, or behavior). 
Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-reinforcement learning, named Crossbar Adaptive Array (CAA). The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence states. The system is driven by the interaction between cognition and emotion. In recent years[when?], reinforcement learning has become a significant concept in natural language processing (NLP), where tasks are often sequential decision-making rather than static classification. In reinforcement learning, an agent takes actions in an environment to maximize the accumulation of rewards. This framework is well suited to many NLP tasks, including dialogue generation, text summarization, and machine translation, where the quality of the output depends on optimizing long-term or human-centered goals rather than predicting a single correct label. Early applications of RL in NLP emerged in dialogue systems, where conversation was framed as a series of actions optimized for fluency and coherence. These early attempts, including policy gradient and sequence-level training techniques, laid a foundation for the broader application of reinforcement learning to other areas of NLP. A major breakthrough happened with the introduction of reinforcement learning from human feedback (RLHF), a method in which human feedback ratings are used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, an effective language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF for improving output responses and ensuring safety. More recently[when?], researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models. One example is DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. This model was trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. Statistical comparison of reinforcement learning algorithms Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since the performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise. Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise.
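Comparing two algorithms on episodic returns, as described above, reduces to a standard two-sample test once the returns have been collected. The sketch below uses simulated returns standing in for real evaluation runs and applies Welch's t-test from SciPy; the permutation test mentioned above could be substituted via scipy.stats.permutation_test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    returns_a = rng.normal(loc=105.0, scale=20.0, size=100)   # episodic returns, algorithm A
    returns_b = rng.normal(loc=100.0, scale=20.0, size=100)   # episodic returns, algorithm B

    result = stats.ttest_ind(returns_a, returns_b, equal_var=False)  # Welch's t-test
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")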
Challenges and limitations Despite significant advancements, reinforcement learning (RL) continues to face several challenges and limitations that hinder its widespread application in real-world scenarios. RL algorithms often require a large number of interactions with the environment to learn effective policies, leading to high computational costs and lengthy training times. For instance, OpenAI's Dota-playing bot utilized thousands of years of simulated gameplay to achieve human-level performance. Techniques like experience replay and curriculum learning have been proposed to alleviate sample inefficiency, but these techniques add complexity and are not always sufficient for real-world applications. Training RL models, particularly deep neural network-based models, can be unstable and prone to divergence. A small change in the policy or environment can lead to extreme fluctuations in performance, making it difficult to achieve consistent results. This instability is exacerbated in continuous or high-dimensional action spaces, where learning becomes more complex and less predictable. RL agents trained in specific environments often struggle to generalize their learned policies to new, unseen scenarios. This is a major obstacle to applying RL in dynamic real-world environments where adaptability is crucial. The challenge is to develop algorithms that can transfer knowledge across tasks and environments without extensive retraining. Designing appropriate reward functions is critical in RL, because poorly designed reward functions can lead to unintended behaviors. In addition, RL systems trained on biased data may perpetuate existing biases and lead to discriminatory or unfair outcomes. Both of these issues require careful consideration of reward structures and data sources to ensure fairness and desired behaviors. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Feature_engineering] | [TOKENS: 1044]
Contents Feature engineering Feature engineering is a preprocessing step in supervised machine learning and statistical modeling which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability. Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists construct dimensionless numbers such as the Reynolds number in fluid dynamics, the Nusselt number in heat transfer, and the Archimedes number in sedimentation. They also develop first approximations of solutions, such as analytical solutions for the strength of materials in mechanics. Clustering One of the applications of feature engineering has been clustering of feature-objects or sample-objects in a dataset. In particular, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These include Non-Negative Matrix Factorization (NMF), Non-Negative Matrix-Tri Factorization (NMTF), Non-Negative Tensor Decomposition/Factorization (NTF/NTD), etc. The non-negativity constraints on the coefficients of the feature vectors mined by the above-stated algorithms yield a part-based representation, and the different factor matrices exhibit natural clustering properties. Several extensions of the above-stated feature engineering methods have been reported in the literature, including orthogonality-constrained factorization for hard clustering, and manifold learning to overcome inherent issues with these algorithms. Other classes of feature engineering algorithms include leveraging a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD), which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering). Coupled matrix and tensor decompositions are popular in multi-view feature engineering. Predictive modelling Feature engineering in machine learning and statistical modeling involves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods like Principal Components Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices. Features vary in significance. Even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting). Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization; it can be limited via techniques such as regularization, kernel methods, and feature selection. Automation Automation of feature engineering is a research topic that dates back to the 1990s. Machine learning software that incorporates automated feature engineering has been commercially available since 2016.
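Returning to the matrix-factorization clustering described above, the following is a minimal sketch, assuming scikit-learn is available; the non-negative data matrix and the number of components are illustrative choices.

```python
# Minimal sketch of NMF-based clustering: factorize a non-negative data matrix
# and read cluster assignments off the non-negative coefficient matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 20)))      # non-negative data (samples x features)

nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                    # sample coefficients (100 x 3)
H = nmf.components_                         # feature coefficients (3 x 20)

# The non-negative coefficients often exhibit natural clustering structure:
# assign each sample to the component with the largest coefficient.
labels = W.argmax(axis=1)
print(labels[:10])
```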
Related academic literature can be roughly separated into two types: Multi-relational Decision Tree Learning (MRDTL) extends traditional decision tree methods to relational databases, handling complex data relationships across tables. It innovatively uses selection graphs as decision nodes, refined systematically until a specific termination criterion is reached. Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple id propagation. There are a number of open-source libraries and tools that automate feature engineering on relational data and time series: OneBM helps data scientists reduce data exploration time, allowing them to test many ideas through trial and error in a short time. On the other hand, it enables non-experts, who are not familiar with data science, to quickly extract value from their data with little effort, time, and cost. The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition. Feature stores The feature store is where the features are stored and organized for the explicit purpose of being used to either train models (by data scientists) or make predictions (by applications that have a trained model). It is a central location where you can either create or update groups of features created from multiple different data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features but simply retrieve them when they are needed to make predictions. A feature store includes the ability to store code used to generate features, apply the code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used. Feature stores can be standalone software tools or built into machine learning platforms. Alternatives Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error. Deep learning algorithms may be used to process a large raw dataset without having to resort to feature engineering. However, deep learning algorithms still require careful preprocessing and cleaning of the input data. In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process. See also References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Feature_learning] | [TOKENS: 3879]
Contents Feature learning In machine learning (ML), feature learning or representation learning is a set of techniques that allow a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that ML tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data, such as image, video, and sensor data, have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. Feature learning can be supervised, unsupervised, or self-supervised: Supervised Supervised feature learning is learning features from labeled data. The data label allows the system to compute an error term, the degree to which the system fails to produce the label, which can then be used as feedback to correct the learning process (reduce/minimize the error). Approaches include: Dictionary learning develops a set (dictionary) of representative elements from the input data such that each data point can be represented as a weighted sum of the representative elements. The dictionary elements and the weights may be found by minimizing the average representation error (over the input data), together with L1 regularization on the weights to enable sparsity (i.e., the representation of each data point has only a few nonzero weights). Supervised dictionary learning exploits both the structure underlying the input data and the labels for optimizing the dictionary elements. For example, one supervised dictionary learning technique applies dictionary learning to classification problems by jointly optimizing the dictionary elements, the weights for representing data points, and the parameters of the classifier based on the input data. In particular, a minimization problem is formulated, where the objective function consists of the classification error, the representation error, an L1 regularization on the representing weights for each data point (to enable sparse representation of data), and an L2 regularization on the parameters of the classifier. Neural networks are a family of learning algorithms that use a "network" consisting of multiple layers of inter-connected nodes. They are inspired by the animal nervous system, in which the nodes are viewed as neurons and the edges as synapses. Each edge has an associated weight, and the network defines computational rules for passing input data from the network's input layer to the output layer. A network function associated with a neural network characterizes the relationship between the input and output layers and is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights). Multilayer neural networks can be used to perform feature learning, since they learn a representation of their input at the hidden layer(s), which is subsequently used for classification or regression at the output layer. A popular network architecture of this type is the Siamese network. Unsupervised Unsupervised feature learning is learning features from unlabeled data.
The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data. Several approaches are introduced in the following. K-means clustering is a popular clustering method. In particular, given a set of n vectors, k-means clustering groups them into k clusters (i.e., subsets) in such a way that each vector belongs to the cluster with the closest mean. The problem is computationally NP-hard, although suboptimal greedy algorithms have been developed. K-means clustering can be used to group an unlabeled set of inputs into k clusters, and then use the centroids of these clusters to produce features. These features can be produced in several ways. The simplest is to add k binary features to each sample, where each feature j has value one iff the jth centroid learned by k-means is the closest to the sample under consideration. It is also possible to use the distances to the clusters as features, perhaps after transforming them through a radial basis function (a technique that has been used to train RBF networks). Coates and Ng note that certain variants of k-means behave similarly to sparse coding algorithms. In a comparative evaluation of unsupervised feature learning methods, Coates, Lee and Ng found that k-means clustering with an appropriate transformation outperforms the more recently invented auto-encoders and RBMs on an image classification task. K-means also improves performance in the domain of NLP, specifically for named-entity recognition; there, it competes with Brown clustering, as well as with distributed word representations (also known as neural word embeddings). Principal component analysis (PCA) is often used for dimension reduction. Given an unlabeled set of n input data vectors, PCA generates p (which is much smaller than the dimension of the input data) right singular vectors corresponding to the p largest singular values of the data matrix, where the kth row of the data matrix is the kth input data vector shifted by the sample mean of the input (i.e., subtracting the sample mean from the data vector). Equivalently, these singular vectors are the eigenvectors corresponding to the p largest eigenvalues of the sample covariance matrix of the input vectors. These p singular vectors are the feature vectors learned from the input data, and they represent directions along which the data has the largest variations. PCA is a linear feature learning approach since the p singular vectors are linear functions of the data matrix. The singular vectors can be generated via a simple algorithm with p iterations. In the ith iteration, the projection of the data matrix on the (i-1)th eigenvector is subtracted, and the ith singular vector is found as the right singular vector corresponding to the largest singular value of the residual data matrix. PCA has several limitations. First, it assumes that the directions with large variance are of most interest, which may not be the case. PCA only relies on orthogonal transformations of the original data, and it exploits only the first- and second-order moments of the data, which may not well characterize the data distribution.
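A minimal sketch of the k-means feature construction described above (binary cluster-membership features and cluster-distance features), assuming scikit-learn is available; the data here is synthetic.

```python
# K-means-based feature construction: one-hot membership of the closest
# centroid plus distances to all k centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))

k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Binary features: 1 for the closest centroid, 0 elsewhere.
membership = np.eye(k)[km.predict(X)]

# Distance features: KMeans.transform returns the distance from each
# sample to every cluster centroid.
distances = km.transform(X)

features = np.hstack([membership, distances])
print(features.shape)                       # (500, 20)
```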
Furthermore, PCA can effectively reduce dimension only when the input data vectors are correlated (which results in a few dominant eigenvalues). Local linear embedding (LLE) is a nonlinear learning approach for generating low-dimensional neighbor-preserving representations from (unlabeled) high-dimension input. The approach was proposed by Roweis and Saul (2000). The general idea of LLE is to reconstruct the original high-dimensional data using lower-dimensional points while maintaining some geometric properties of the neighborhoods in the original data set. LLE consists of two major steps. The first step is for "neighbor-preserving", where each input data point Xi is reconstructed as a weighted sum of K nearest neighbor data points, and the optimal weights are found by minimizing the average squared reconstruction error (i.e., the difference between an input point and its reconstruction) under the constraint that the weights associated with each point sum to one. The second step is for "dimension reduction", by looking for vectors in a lower-dimensional space that minimize the representation error using the optimized weights from the first step. Note that in the first step, the weights are optimized with fixed data, which can be solved as a least squares problem. In the second step, lower-dimensional points are optimized with fixed weights, which can be solved via sparse eigenvalue decomposition. The reconstruction weights obtained in the first step capture the "intrinsic geometric properties" of a neighborhood in the input data. It is assumed that the original data lie on a smooth lower-dimensional manifold, and the "intrinsic geometric properties" captured by the weights of the original data are also expected to be on the manifold. This is why the same weights are used in the second step of LLE. Compared with PCA, LLE is more powerful in exploiting the underlying data structure. Independent component analysis (ICA) is a technique for forming a data representation using a weighted sum of independent non-Gaussian components. The assumption of non-Gaussianity is imposed since the weights cannot be uniquely determined when all the components follow a Gaussian distribution. Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data for optimizing dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data. Aharon et al. proposed the K-SVD algorithm for learning a dictionary of elements that enables sparse representation. Multilayer/deep architectures The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning by stacking multiple layers of learning nodes. These architectures are often designed based on the assumption of distributed representation: observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. Each level uses the representation produced by the previous, lower level as input, and produces new representations as output, which are then fed to higher levels.
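A minimal sketch of the unsupervised dictionary learning with sparse coding described above, assuming scikit-learn is available. Note that this uses scikit-learn's LARS/coordinate-descent-based dictionary learner rather than K-SVD specifically, and the data is synthetic.

```python
# Unsupervised dictionary learning: learn dictionary elements and sparse
# weights (codes) with an L1 penalty on the representation of each point.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))              # 200 data points, 30 dimensions

dico = DictionaryLearning(
    n_components=10,                        # number of dictionary elements
    alpha=1.0,                              # L1 regularization on the weights
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dico.fit_transform(X)               # sparse weights for each data point
dictionary = dico.components_               # learned dictionary elements (10 x 30)

# Each row of `codes` should have only a few nonzero weights (sparsity).
print((np.abs(codes) > 1e-8).sum(axis=1)[:10])
```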
The input at the bottom layer is raw data, and the output of the final, highest layer is the final low-dimensional feature or representation. Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machine with the constraint that there are no connections between nodes within the same layer. Each edge in an RBM is associated with a weight. The weights together with the connections define an energy function, based on which a joint distribution of visible and hidden nodes can be devised. Based on the topology of the RBM, the hidden (visible) variables are independent, conditioned on the visible (hidden) variables. Such conditional independence facilitates computations. An RBM can be viewed as a single-layer architecture for unsupervised feature learning. In particular, the visible variables correspond to input data, and the hidden variables correspond to feature detectors. The weights can be trained by maximizing the probability of visible variables using Hinton's contrastive divergence (CD) algorithm. In general, training RBMs by solving the maximization problem tends to result in non-sparse representations. Sparse RBM was proposed to enable sparse representations. The idea is to add a regularization term to the objective function of the data likelihood, which penalizes the deviation of the expected hidden variables from a small constant $p$. RBMs have also been used to obtain disentangled representations of data, where interesting features map to separate hidden units. An autoencoder consisting of an encoder and a decoder is a paradigm for deep learning architectures. An example is provided by Hinton and Salakhutdinov, in which the encoder uses raw data (e.g., an image) as input and produces a feature or representation as output, and the decoder uses the extracted feature from the encoder as input and reconstructs the original raw input data as output. The encoder and decoder are constructed by stacking multiple layers of RBMs. The parameters involved in the architecture were originally trained in a greedy layer-by-layer manner: after one layer of feature detectors is learned, its outputs are fed upward as the visible variables for training the next RBM. Current approaches typically apply end-to-end training with stochastic gradient descent methods. Training can be repeated until some stopping criteria are satisfied. Self-supervised Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations. Training tasks typically fall under the classes of either contrastive, generative or both. Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A larger portion of negative samples is typically necessary in order to prevent catastrophic collapse, which is when all inputs are mapped to the same representation.
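A minimal sketch of a contrastive objective of this kind (an InfoNCE-style loss, as used in SimCLR-like methods), assuming PyTorch is available. The representations of two augmented views of the same batch are stand-in tensors; in practice they would come from an encoder network.

```python
# Contrastive (InfoNCE-style) loss: align representations of positive pairs,
# contrast against the other items in the batch as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # Normalize so dot products are cosine similarities.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarity matrix
    # The positive pair for row i is column i; other columns act as negatives.
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

batch, dim = 32, 128
z_view1 = torch.randn(batch, dim, requires_grad=True)   # encoder output, view 1
z_view2 = torch.randn(batch, dim, requires_grad=True)   # encoder output, view 2
loss = info_nce(z_view1, z_view2)
loss.backward()
print(loss.item())
```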
Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower dimensional representation. A common setup for self-supervised representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general context, unlabeled data. Depending on the context, the result of this is either a set of representations for common data segments (e.g. words) which new data can be broken into, or a neural network able to convert each new data point (e.g. image) into a set of lower dimensional features. In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or freezing the representations and training an additional model which takes them as an input. Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types. Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text. The model has two possible training schemes to produce word vector representations, one generative and one contrastive. The first is word prediction given each of the neighboring words as an input. The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks. GPTs pretrain on next word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context. Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data. Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph it is within, and is therefore intended to represent paragraph level context. The domain of image representation learning has employed many different self-supervised training techniques, including transformation, inpainting, patch discrimination and clustering. Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input, and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution. Many other self-supervised methods use siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. The challenge is avoiding collapsing solutions where the model encodes all images to the same representation. SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN. 
Bootstrap Your Own Latent (BYOL) removes the need for negative samples by encoding one of the views with a slow moving average of the model parameters as they are being modified during training. The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology. node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association. Another approach is to maximize mutual information, a measure of similarity, between the representations of associated structures within the graph. An example is Deep Graph Infomax, which uses contrastive self-supervision based on mutual information between the representation of a "patch" around each node, and a summary representation of the entire graph. Negative samples are obtained by pairing the graph representation with either representations from another graph in a multigraph training setting, or corrupted patch representations in single graph training. With analogous results in masked prediction and clustering, video representation learning approaches are often similar to image techniques but must utilize the temporal sequence of video frames as an additional learned structure. Examples include VCP, which masks video clips and trains to choose the correct one given a set of clip options, and Xu et al., who train a 3D-CNN to identify the original order given a shuffled set of video clips. Self-supervised representation techniques have also been applied to many audio data formats, particularly for speech processing. Wav2vec 2.0 discretizes the audio waveform into timesteps via temporal convolutions, and then trains a transformer on masked prediction of random timesteps using a contrastive loss. This is similar to the BERT language model, except as in many SSL approaches to video, the model chooses among a set of options rather than over the entire word vocabulary. Self-supervised learning has also been used to develop joint representations of multiple data types. Approaches usually rely on some natural or human-derived association between the modalities as an implicit label, for instance video clips of animals or objects with characteristic sounds, or captions written to describe images. CLIP produces a joint image-text representation space by training to align image and text encodings from a large dataset of image-caption pairs using a contrastive loss. MERLOT Reserve trains a transformer-based encoder to jointly represent audio, subtitles and video frames from a large dataset of videos through 3 joint pretraining tasks: contrastive masked prediction of either audio or text segments given the video frames and surrounding audio and text context, along with contrastive alignment of video frames with their corresponding captions. Multimodal representation models are typically unable to assume direct correspondence of representations in the different modalities, since the precise alignment can often be noisy or ambiguous. For example, the text "dog" could be paired with many different pictures of dogs, and correspondingly a picture of a dog could be captioned with varying degrees of specificity. This limitation means that downstream tasks may require an additional generative mapping network between modalities to achieve optimal performance, such as in DALLE-2 for text to image generation. 
Dynamic Representation Learning Dynamic representation learning methods generate latent embeddings for dynamic systems such as dynamic networks. Since particular distance functions are invariant under particular linear transformations, different sets of embedding vectors can actually represent the same or similar information. Therefore, for a dynamic system, a temporal difference in its embeddings may be explained by misalignment of embeddings due to arbitrary transformations and/or by actual changes in the system. Consequently, temporal embeddings learned via dynamic representation learning methods should generally be inspected for spurious changes and be aligned before subsequent dynamic analyses (one common alignment choice, an orthogonal Procrustes rotation, is sketched below). See also References
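The source text does not prescribe a specific alignment method; the following is a minimal sketch of one common choice, an orthogonal Procrustes alignment, assuming SciPy is available. The embedding matrices are synthetic stand-ins whose rows are matched across the two time steps.

```python
# Align embeddings from two time steps before comparing them, so that
# differences due to an arbitrary rotation are removed.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
emb_t0 = rng.normal(size=(100, 16))                         # embeddings at time t0
rotation = np.linalg.qr(rng.normal(size=(16, 16)))[0]       # arbitrary orthogonal map
emb_t1 = emb_t0 @ rotation + 0.01 * rng.normal(size=(100, 16))  # rotated + small real change

# Find the orthogonal matrix R that best aligns emb_t1 to emb_t0.
R, _ = orthogonal_procrustes(emb_t1, emb_t0)
aligned_t1 = emb_t1 @ R

print(np.linalg.norm(emb_t1 - emb_t0))      # large: mostly spurious (rotation) difference
print(np.linalg.norm(aligned_t1 - emb_t0))  # small: remaining change after alignment
```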
========================================
[SOURCE: https://en.wikipedia.org/wiki/Multimodal_learning] | [TOKENS: 1357]
Contents Multimodal learning Multimodal learning is a type of deep learning that integrates and processes multiple types of data, referred to as modalities, such as text, audio, images, or video. This integration allows for a more holistic understanding of complex data, improving model performance in tasks like visual question answering, cross-modal retrieval, text-to-image generation, aesthetic ranking, and image captioning. Large multimodal models, such as Google Gemini and GPT-4o, have become increasingly popular since 2023, enabling increased versatility and a broader understanding of real-world phenomena. Motivation Data usually comes with different modalities which carry different information. For example, it is very common to caption an image to convey the information not presented in the image itself. Similarly, sometimes it is more straightforward to use an image to describe information which may not be obvious from text. As a result, if different words appear in similar images, then these words likely describe the same thing. Conversely, if a word is used to describe seemingly dissimilar images, then these images may represent the same object. Thus, in cases dealing with multi-modal data, it is important to use a model which is able to jointly represent the information such that the model can capture the combined information from different modalities. Multimodal transformers Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned. Vision transformers adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like embedding vectors of tokens in a standard transformer. Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like embedding vectors of tokens in a standard transformer. Perceivers are a variant of transformers designed for multimodality. For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder–decoder transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens.
The generated tokens are then decoded to a video. Multimodality means having multiple modalities, where a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. For example, the Google PaLM model was fine-tuned into a multimodal model and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs and video inputs. GPT-4o can process and generate text, audio and images. Such models are sometimes called large multimodal models (LMMs). A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder $E$. Make a small multilayer perceptron $f$, so that for any image $y$, the post-processed vector $f(E(y))$ has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens, as sketched below. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. This type of method, where embeddings from multiple modalities are fused and the predictor is trained on the combined embeddings, is called early fusion. Another method, called intermediate fusion, involves each modality being first processed independently to obtain modality-specific representations; then these intermediate representations are fused together. In general, cross-attention is used for integrating information from different modalities. As an example, the Flamingo model uses cross-attention layers to inject visual information into its pre-trained language model. Multimodal deep Boltzmann machines A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They are named after the Boltzmann distribution in statistical mechanics. The units in Boltzmann machines are divided into two groups: visible units and hidden units. Each unit is like a neuron with a binary output that represents whether it is activated or not. General Boltzmann machines allow connections between any units. However, learning is impractical using general Boltzmann machines because the computational time is exponential in the size of the machine. A more efficient architecture, the restricted Boltzmann machine, allows connections only between hidden units and visible units; it is described in the next section. Multimodal deep Boltzmann machines can process and learn from different types of information, such as images and text, simultaneously. This can notably be done by having a separate deep Boltzmann machine for each modality, for example one for images and one for text, joined at an additional top hidden layer. Applications Multimodal machine learning has numerous applications across various domains: See also References
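A minimal sketch of the image-token construction referenced above, assuming PyTorch is available; all dimensions, module shapes, and the stand-in tensors for the encoder output and the embedded text are illustrative, not any particular model's actual configuration.

```python
# Turn image-encoder outputs into "image tokens" via a small projection MLP f,
# so they can be interleaved with text token embeddings before the LLM.
import torch
import torch.nn as nn

d_vision = 1024      # dimension of the (frozen) image encoder's output E(y)
d_model = 4096       # embedding dimension of the language model

# Small multilayer perceptron f mapping encoder features to token space.
f = nn.Sequential(
    nn.Linear(d_vision, d_model),
    nn.GELU(),
    nn.Linear(d_model, d_model),
)

# Suppose the image encoder produced 16 patch features for one image.
image_features = torch.randn(1, 16, d_vision)          # stand-in for E(y)
image_tokens = f(image_features)                        # (1, 16, d_model)

# Interleave with text token embeddings before feeding the language model.
text_tokens = torch.randn(1, 32, d_model)               # stand-in for embedded text
sequence = torch.cat([image_tokens, text_tokens], dim=1)
print(sequence.shape)                                    # torch.Size([1, 48, 4096])
```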
========================================
[SOURCE: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm] | [TOKENS: 4618]
Contents k-nearest neighbors algorithm In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. Most often, it is used for classification, as a k-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. The k-NN algorithm can also be generalized for regression. In k-NN regression, also known as nearest neighbor smoothing, the output is the property value for the object. This value is the average of the values of k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor, also known as nearest neighbor interpolation. For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor. The input consists of the k closest training examples in a data set. The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. A peculiarity (sometimes even a disadvantage) of the k-NN algorithm is its sensitivity to the local structure of the data. In k-NN classification the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wise normalizing of the training data can greatly improve its accuracy. Statistical setting Suppose we have pairs $(X_1,Y_1),(X_2,Y_2),\dots,(X_n,Y_n)$ taking values in $\mathbb{R}^d\times\{1,2\}$, where $Y$ is the class label of $X$, so that $X|Y=r\sim P_r$ for $r=1,2$ (and probability distributions $P_r$). Given some norm $\|\cdot\|$ on $\mathbb{R}^d$ and a point $x\in\mathbb{R}^d$, let $(X_{(1)},Y_{(1)}),\dots,(X_{(n)},Y_{(n)})$ be a reordering of the training data such that $\|X_{(1)}-x\|\leq\dots\leq\|X_{(n)}-x\|$. Algorithm The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is Euclidean distance.
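A minimal sketch of the classification rule just described, assuming NumPy is available and using a synthetic training set: store the training examples, then classify a query by a plurality vote among its k nearest neighbors under Euclidean distance.

```python
# From-scratch k-NN classification by majority (plurality) vote.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # two synthetic classes

def knn_predict(x_query, X, y, k=5):
    # Euclidean distances from the query to every stored training example.
    dists = np.linalg.norm(X - x_query, axis=1)
    # Indices of the k closest training examples.
    nearest = np.argsort(dists)[:k]
    # Plurality vote among the neighbors' labels.
    values, counts = np.unique(y[nearest], return_counts=True)
    return values[np.argmax(counts)]

print(knn_predict(np.array([0.5, 0.2]), X_train, y_train, k=5))
```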
For discrete variables, such as for text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for example, k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as large margin nearest neighbor or neighborhood components analysis. A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among the k nearest neighbors due to their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. The class (or value, in regression problems) of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in a self-organizing map (SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data. k-NN can then be applied to the SOM. Parameter selection The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification, but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques (see hyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. Another popular approach is to scale features by the mutual information of the training data with the training classes. In binary (two class) classification problems, it is helpful to choose k to be an odd number as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method. The 1-nearest neighbor classifier The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point x to the class of its closest neighbour in the feature space, that is $C_n^{1nn}(x)=Y_{(1)}$. As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). The weighted nearest neighbour classifier The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight $1/k$ and all others weight $0$. This can be generalised to weighted nearest neighbour classifiers. That is, where the ith nearest neighbour is assigned a weight $w_{ni}$, with $\sum_{i=1}^{n} w_{ni}=1$.
An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds. Let $C_n^{wnn}$ denote the weighted nearest neighbour classifier with weights $\{w_{ni}\}_{i=1}^{n}$. Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion $\mathcal{R}_{\mathcal{R}}(C_n^{wnn}) - \mathcal{R}_{\mathcal{R}}(C^{\text{Bayes}}) = \left(B_1 s_n^2 + B_2 t_n^2\right)\{1+o(1)\}$, for constants $B_1$ and $B_2$, where $s_n^2 = \sum_{i=1}^{n} w_{ni}^2$ and $t_n = n^{-2/d}\sum_{i=1}^{n} w_{ni}\left\{i^{1+2/d}-(i-1)^{1+2/d}\right\}$. The optimal weighting scheme $\{w_{ni}^{*}\}_{i=1}^{n}$, which balances the two terms in the display above, is given as follows: set $k^{*} = \lfloor Bn^{\frac{4}{d+4}}\rfloor$, $w_{ni}^{*} = \frac{1}{k^{*}}\left[1+\frac{d}{2}-\frac{d}{2{k^{*}}^{2/d}}\left\{i^{1+2/d}-(i-1)^{1+2/d}\right\}\right]$ for $i=1,2,\dots,k^{*}$, and $w_{ni}^{*}=0$ for $i=k^{*}+1,\dots,n$. With optimal weights the dominant term in the asymptotic expansion of the excess risk is $\mathcal{O}(n^{-\frac{4}{d+4}})$. Similar results are true when using a bagged nearest neighbour classifier. Properties k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel. The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. k-NN has some strong consistency results. As the amount of data approaches infinity, the two-class k-NN algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). Various improvements to the k-NN speed are possible by using proximity graphs. For multi-class k-NN classification, Cover and Hart (1967) prove an upper bound error rate of $R^{*} \leq R_{k\mathrm{NN}} \leq R^{*}\left(2-\frac{MR^{*}}{M-1}\right)$, where $R^{*}$ is the Bayes error rate (which is the minimal error rate possible), $R_{k\mathrm{NN}}$ is the asymptotic k-NN error rate, and M is the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution.
For $M=2$ and as the Bayesian error rate $R^{*}$ approaches zero, this limit reduces to "not more than twice the Bayesian error rate". Error rates There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly consistent (that is, for any joint distribution on $(X,Y)$) provided $k:=k_n$ diverges and $k_n/n$ converges to zero as $n\to\infty$. Let $C_n^{knn}$ denote the k nearest neighbour classifier based on a training set of size n. Under certain regularity conditions, the excess risk yields the following asymptotic expansion $\mathcal{R}_{\mathcal{R}}(C_n^{knn}) - \mathcal{R}_{\mathcal{R}}(C^{\text{Bayes}}) = \left\{B_1\frac{1}{k}+B_2\left(\frac{k}{n}\right)^{4/d}\right\}\{1+o(1)\}$, for some constants $B_1$ and $B_2$. The choice $k^{*}=\left\lfloor Bn^{\frac{4}{d+4}}\right\rfloor$ offers a trade-off between the two terms in the above display, for which the $k^{*}$-nearest neighbour error converges to the Bayes error at the optimal (minimax) rate $\mathcal{O}\left(n^{-\frac{4}{d+4}}\right)$. Metric learning The k-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms are neighbourhood components analysis and large margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a new metric or pseudo-metric. Feature extraction When the input data to an algorithm is too large to be processed and is suspected to be redundant (e.g. the same measurement in both feet and meters), the input data will be transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into the set of features is called feature extraction. If the features extracted are carefully chosen, it is expected that the feature set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full-size input. Feature extraction is performed on raw data prior to applying the k-NN algorithm on the transformed data in feature space. A typical computer vision computation pipeline for face recognition using k-NN includes feature extraction and dimension reduction pre-processing steps (usually implemented with OpenCV). Dimension reduction For high-dimensional data (e.g., with number of dimensions more than 10) dimension reduction is usually performed prior to applying the k-NN algorithm in order to avoid the effects of the curse of dimensionality. The curse of dimensionality in the k-NN context basically means that Euclidean distance is unhelpful in high dimensions because all vectors are almost equidistant to the search query vector (imagine multiple points lying more or less on a circle with the query point at the center; the distance from the query to all data points in the search space is almost the same).
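A minimal sketch of reducing dimension before applying k-NN, as discussed above, assuming scikit-learn is available; the dataset and dimensionalities are illustrative choices.

```python
# Project high-dimensional data to a lower-dimensional space with PCA,
# then classify with a k-NN classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=50, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))          # accuracy on held-out data
```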
Feature extraction and dimension reduction can be combined in one step using principal component analysis (PCA), linear discriminant analysis (LDA), or canonical correlation analysis (CCA) techniques as a pre-processing step, followed by clustering by k-NN on feature vectors in reduced-dimension space. This process is also called low-dimensional embedding. For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensional time series) running a fast approximate k-NN search using locality sensitive hashing, "random projections", "sketches" or other high-dimensional similarity search techniques from the VLDB toolbox might be the only feasible option. Decision boundary Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity. Data reduction Data reduction is one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. Those data are called the prototypes and can be found as follows: A training example surrounded by examples of other classes is called a class outlier. Class outliers can arise for several reasons. Class outliers with k-NN produce noise. They can be detected and separated for future analysis. Given two natural numbers, k>r>0, a training example is called a (k,r)NN class-outlier if its k nearest neighbors include more than r examples of other classes. Condensed nearest neighbor (CNN, the Hart algorithm) is an algorithm designed to reduce the data set for k-NN classification. It selects the set of prototypes U from the training data, such that 1NN with U can classify the examples almost as accurately as 1NN does with the whole data set. Given a training set X, CNN works iteratively: it scans the examples in X and adds to U every example that is misclassified by 1NN using the current contents of U, repeating until no further prototypes are added; U is then used instead of X for classification. The examples that are not prototypes are called "absorbed" points. It is efficient to scan the training examples in order of decreasing border ratio. The border ratio of a training example x is defined as a(x) = ‖x′−y‖ / ‖x−y‖, where ‖x−y‖ is the distance to the closest example y having a different color than x, and ‖x′−y‖ is the distance from y to its closest example x′ with the same label as x. The border ratio is in the interval [0,1] because ‖x′−y‖ never exceeds ‖x−y‖. This ordering gives preference to the borders of the classes for inclusion in the set of prototypes U. A point of a different label than x is called external to x. The calculation of the border ratio is illustrated by the figure on the right. The data points are labeled by colors: the initial point is x and its label is red. External points are blue and green. The closest external point to x is y. The closest red point to y is x′. The border ratio a(x) = ‖x′−y‖ / ‖x−y‖ is the attribute of the initial point x. Below is an illustration of CNN in a series of figures. There are three classes (red, green and blue). Fig. 1: initially there are 60 points in each class. Fig. 2 shows the 1NN classification map: each pixel is classified by 1NN using all the data. Fig. 3 shows the 5NN classification map. White areas correspond to the unclassified regions, where 5NN voting is tied (for example, if there are two green, two red and one blue points among 5 nearest neighbors). Fig. 4 shows the reduced data set.
The crosses are the class-outliers selected by the (3,2)NN rule (all three nearest neighbors of these instances belong to other classes); the squares are the prototypes, and the empty circles are the absorbed points. The left bottom corner shows the numbers of the class-outliers, prototypes and absorbed points for all three classes. The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet. k-NN regression In k-NN regression, also known as k-NN smoothing, the k-NN algorithm is used for estimating continuous variables. One such algorithm uses a weighted average of the k nearest neighbors, weighted by the inverse of their distance. k-NN outlier The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection. The larger the distance to the k-NN, the lower the local density, and the more likely the query point is an outlier. Although quite simple, this outlier model, along with another classic data mining method, local outlier factor, works quite well also in comparison to more recent and more complex approaches, according to a large-scale experimental analysis. Validation of results A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as the likelihood-ratio test can also be applied. See also References Further reading
========================================
[SOURCE: https://he.wikipedia.org/wiki/ירוק] | [TOKENS: 1120]
Contents Green Green is a color produced by light with wavelengths of approximately 520–570 nanometers (an approximate frequency between 525 and 575 terahertz) in the visible spectrum. The sensitivity of the light-receptor cells after dark adaptation (the scotopic curve) peaks at a wavelength of roughly 507 nanometers, corresponding to blue-green, whereas after adaptation to light conditions (the photopic curve) peak sensitivity lies at about 555 nanometers, corresponding to yellowish green. Green is considered one of the additive primary colors, together with red and blue. In subtractive color mixing, by contrast, green is obtained by mixing yellow and cyan pigments. Etymology The Hebrew word for green, "yarok", closely resembles the word waraq, which according to the reconstruction of Proto-Semitic described both green and yellow in the ancestor of the Semitic languages. (According to this reconstruction, Proto-Semitic had names for only four colors: red-brown, green-yellow, white, and black.) It also resembles the Arabic word for a leaf. The relationship between the Hebrew words for "green" and "vegetables" is analogous to the relationship between the Arabic words for the color green and for vegetables. Uses and symbolic meanings The color green symbolizes nature, because most plants are green, a result of the light-absorption properties of the chlorophyll molecules found mainly in the cells of their leaves. Since green symbolizes healthy nature through the color of plants, the color and the label "green" also symbolize concern for environmental quality and ecology. External links Footnotes
========================================
[SOURCE: https://en.wikipedia.org/wiki/Bootstrap_aggregating] | [TOKENS: 2062]
Contents Bootstrap aggregating Bootstrap aggregating, also called bagging (from bootstrap aggregating) or bootstrapping, is a machine learning (ML) ensemble meta-algorithm designed to improve the stability and accuracy of ML classification and regression algorithms. It also reduces variance and overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the ensemble averaging approach. Description of the technique Given a standard training set $D$ of size $n$, bagging generates $m$ new training sets $D_i$, each of size $n'$, by sampling from $D$ uniformly and with replacement. By sampling with replacement, some observations may be repeated in each $D_i$. If $n'=n$, then for large $n$ the set $D_i$ is expected to have the fraction $(1-1/e)$ (about 63.2%) of the unique samples of $D$, the rest being duplicates. This kind of sample is known as a bootstrap sample. Sampling with replacement ensures each bootstrap sample is independent from its peers, as it does not depend on previously chosen samples. Then, $m$ models are fitted using the above bootstrap samples and combined by averaging the output (for regression) or voting (for classification). Bagging leads to "improvements for unstable procedures", which include, for example, artificial neural networks, classification and regression trees, and subset selection in linear regression. Bagging was shown to improve preimage learning. On the other hand, it can mildly degrade the performance of stable methods such as k-nearest neighbors. Process of the algorithm There are three types of datasets in bootstrap aggregating: the original, bootstrap, and out-of-bag datasets. The sections below explain how each dataset is made, except for the original dataset, which is simply whatever information is given. The bootstrap dataset is made by randomly picking objects from the original dataset, and it must be the same size as the original dataset. The difference is that the bootstrap dataset can have duplicate objects. Here is a simple example to demonstrate how it works, along with the illustration below: Suppose the original dataset is a group of 12 people. Their names are Emily, Jessie, George, Constantine, Lexi, Theodore, John, James, Rachel, Anthony, Ellie, and Jamal. Randomly picking a group of names, say our bootstrap dataset contains James, Ellie, Constantine, Lexi, John, Constantine, Theodore, Constantine, Anthony, Lexi, Constantine, and Theodore. In this case, the bootstrap sample contains four occurrences of Constantine and two each of Lexi and Theodore. The out-of-bag dataset represents the remaining people who were not in the bootstrap dataset. It can be calculated by taking the difference between the original and the bootstrap datasets. In this case, the remaining samples who were not selected are Emily, Jessie, George, Rachel, and Jamal. Keep in mind that since both datasets are sets, when taking the difference the duplicate names are ignored in the bootstrap dataset. The illustration below shows how the math is done: Creating the bootstrap and out-of-bag datasets is crucial since they are used to test the accuracy of ensemble learning algorithms like random forest.
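A minimal sketch of the bagging procedure just described, assuming NumPy and scikit-learn are available: draw m bootstrap samples of size n with replacement, fit one base model per sample, and average the predictions (regression). The data and base learner are illustrative choices.

```python
# Bagging with decision-tree base learners on synthetic regression data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=n)

m = 50                                   # number of bootstrap samples / models
models = []
for _ in range(m):
    idx = rng.integers(0, n, size=n)     # bootstrap sample: n indices drawn with replacement
    tree = DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx])
    models.append(tree)

# On average a bootstrap sample contains about 63.2% of the unique points.
unique_fraction = len(np.unique(idx)) / n
print(f"unique fraction in last bootstrap sample: {unique_fraction:.3f}")

# Bagged prediction: average the individual models' predictions.
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
bagged_pred = np.mean([t.predict(X_test) for t in models], axis=0)
print(bagged_pred)
```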
For example, a model that produces 50 trees using the bootstrap/out-of-bag datasets will have a better accuracy than if it produced 10 trees. Since the algorithm generates multiple trees and therefore multiple datasets the chance that an object is left out of the bootstrap dataset is low. The next few sections talk about how the random forest algorithm works in more detail. The next step of the algorithm involves the generation of decision trees from the bootstrapped dataset. To achieve this, the process examines each gene/feature and determines for how many samples the feature's presence or absence yields a positive or negative result. This information is then used to compute a confusion matrix, which lists the true positives, false positives, true negatives, and false negatives of the feature when used as a classifier. These features are then ranked according to various classification metrics based on their confusion matrices. Some common metrics include estimate of positive correctness (calculated by subtracting false positives from true positives), measure of "goodness", and information gain. These features are then used to partition the samples into two sets: those that possess the top feature, and those that do not. The diagram below shows a decision tree of depth two being used to classify data. For example, a data point that exhibits Feature 1, but not Feature 2, will be given a "No". Another point that does not exhibit Feature 1, but does exhibit Feature 3, will be given a "Yes". This process is repeated recursively for successive levels of the tree until the desired depth is reached. At the very bottom of the tree, samples that test positive for the final feature are generally classified as positive, while those that lack the feature are classified as negative. These trees are then used as predictors to classify new data. The next part of the algorithm involves introducing yet another element of variability amongst the bootstrapped trees. In addition to each tree only examining a bootstrapped set of samples, only a small but consistent number of unique features are considered when ranking them as classifiers. This means that each tree only knows about the data pertaining to a small constant number of features, and a variable number of samples that is less than or equal to that of the original dataset. Consequently, the trees are more likely to return a wider array of answers, derived from more diverse knowledge. This results in a random forest, which possesses numerous benefits over a single decision tree generated without randomness. In a random forest, each tree "votes" on whether or not to classify a sample as positive based on its features. The sample is then classified based on majority vote. An example of this is given in the diagram below, where the four trees in a random forest vote on whether or not a patient with mutations A, B, F, and G has cancer. Since three out of four trees vote yes, the patient is then classified as cancer positive. Because of their properties, random forests are considered one of the most accurate data mining algorithms, are less likely to overfit their data, and run quickly and efficiently even for large datasets. They are primarily useful for classification as opposed to regression, which attempts to draw observed connections between statistical variables in a dataset. 
This makes random forests particularly useful in such fields as banking, healthcare, the stock market, and e-commerce where it is important to be able to predict future results based on past data. One of their applications would be as a useful tool for predicting cancer based on genetic factors, as seen in the above example. There are several important factors to consider when designing a random forest. If the trees in the random forests are too deep, overfitting can still occur due to over-specificity. If the forest is too large, the algorithm may become less efficient due to an increased runtime. Random forests also do not generally perform well when given sparse data with little variability. However, they still have numerous advantages over similar data classification algorithms such as neural networks, as they are much easier to interpret and generally require less data for training.[citation needed] As an integral component of random forests, bootstrap aggregating is very important to classification algorithms, and provides a critical element of variability that allows for increased accuracy when analyzing new data, as discussed below. Improving Random Forests and Bagging While the techniques described above utilize random forests and bagging (otherwise known as bootstrapping), there are certain techniques that can be used in order to improve their execution and voting time, their prediction accuracy, and their overall performance. The following are key steps in creating an efficient random forest: Algorithm (classification) For classification, use a training set D {\displaystyle D} , Inducer I {\displaystyle I} and the number of bootstrap samples m {\displaystyle m} as input. Generate a classifier C ∗ {\displaystyle C^{*}} as output Example: ozone data To illustrate the basic principles of bagging, below is an analysis on the relationship between ozone and temperature (data from Rousseeuw and Leroy[clarification needed] (1986), analysis done in R). The relationship between temperature and ozone appears to be nonlinear in this dataset, based on the scatter plot. To mathematically describe this relationship, LOESS smoothers (with bandwidth 0.5) are used. Rather than building a single smoother for the complete dataset, 100 bootstrap samples were drawn. Each sample is composed of a random subset of the original data and maintains a semblance of the master set's distribution and variability. For each bootstrap sample, a LOESS smoother was fit. Predictions from these 100 smoothers were then made across the range of the data. The black lines represent these initial predictions. The lines lack agreement in their predictions and tend to overfit their data points: evident by the wobbly flow of the lines. By taking the average of 100 smoothers, each corresponding to a subset of the original dataset, we arrive at one bagged predictor (red line). The red line's flow is stable and does not overly conform to any data point(s). Advantages and disadvantages Advantages: Disadvantages: History The concept of bootstrap aggregating is derived from the concept of bootstrapping which was developed by Bradley Efron. Bootstrap aggregating was proposed by Leo Breiman who also coined the abbreviated term "bagging" (bootstrap aggregating). Breiman developed the concept of bagging in 1994 to improve classification by combining classifications of randomly generated training sets. 
He argued, "If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy". See also References Further reading
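The ozone example above averages 100 LOESS smoothers fitted to bootstrap samples. The sketch below reproduces that idea with a simple Nadaraya–Watson kernel smoother standing in for LOESS, to keep it dependency-free beyond NumPy; the synthetic data, bandwidth, and helper names are illustrative assumptions, not the original R analysis on the Rousseeuw and Leroy data.

```python
import numpy as np

def kernel_smoother(x_train, y_train, x_eval, bandwidth=5.0):
    """Nadaraya-Watson smoother: a simple stand-in for the LOESS fits in the text."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
# Synthetic temperature/ozone-like data (placeholder for the real dataset).
temp = np.sort(rng.uniform(60, 100, 100))
ozone = 0.02 * (temp - 60) ** 2 + rng.normal(0, 3, temp.size)

grid = np.linspace(60, 100, 200)
preds = []
for _ in range(100):                                # 100 bootstrap samples, as in the example
    idx = rng.integers(0, temp.size, temp.size)     # sample indices with replacement
    preds.append(kernel_smoother(temp[idx], ozone[idx], grid))

bagged = np.mean(preds, axis=0)   # the "red line": the average of all individual smoothers
```

Each individual curve in `preds` wiggles with its particular bootstrap sample; averaging them gives the stable bagged predictor described in the text.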
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fuzzy_clustering] | [TOKENS: 1968]
Contents Fuzzy clustering Fuzzy clustering (also referred to as soft clustering or soft k-means) is a form of clustering in which each data point can belong to more than one cluster. Clustering or cluster analysis involves assigning data points to clusters such that items in the same cluster are as similar as possible, while items belonging to different clusters are as dissimilar as possible. Clusters are identified via similarity measures. These similarity measures include distance, connectivity, and intensity. Different similarity measures may be chosen based on the data or the application. Comparison to hard clustering In non-fuzzy clustering (also known as hard clustering), data are divided into distinct clusters, where each data point can only belong to exactly one cluster. In fuzzy clustering, data points can potentially belong to multiple clusters. For example, an apple can be red or green (hard clustering), but an apple can also be red AND green (fuzzy clustering). Here, the apple can be red to a certain degree as well as green to a certain degree. Instead of the apple belonging to green [green = 1] and not red [red = 0], the apple can belong to green [green = 0.5] and red [red = 0.5]. These values are normalized between 0 and 1; however, they do not represent probabilities, so the two values do not need to add up to 1. Membership Membership grades are assigned to each of the data points (tags). These membership grades indicate the degree to which data points belong to each cluster. Thus, points on the edge of a cluster, with lower membership grades, may be in the cluster to a lesser degree than points in the center of the cluster. Fuzzy C-means clustering One of the most widely used fuzzy clustering algorithms is the fuzzy c-means (FCM) algorithm. Fuzzy c-means clustering was developed by J.C. Dunn in 1973, and improved by J.C. Bezdek in 1981. The fuzzy c-means algorithm is very similar to the k-means algorithm: any point x has a set of coefficients w_k(x) giving its degree of membership in the k-th cluster. With fuzzy c-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster, or, mathematically, c_k = Σ_x w_k(x)^m x / Σ_x w_k(x)^m, where m is the hyperparameter that controls how fuzzy the cluster will be; the higher it is, the fuzzier the cluster will be in the end. The FCM algorithm attempts to partition a finite collection of n elements X = {x_1, ..., x_n} into a collection of c fuzzy clusters with respect to some given criterion. Given a finite set of data, the algorithm returns a list of c cluster centres C = {c_1, ..., c_c} and a partition matrix W = (w_ij), with w_ij ∈ [0, 1] for i = 1, ..., n and j = 1, ..., c, where each element w_ij tells the degree to which element x_i belongs to cluster c_j. The FCM aims to minimize the objective function Σ_{i=1..n} Σ_{j=1..c} w_ij^m ‖x_i − c_j‖², where the memberships are given by w_ij = 1 / Σ_{k=1..c} (‖x_i − c_j‖ / ‖x_i − c_k‖)^(2/(m−1)). K-means clustering also attempts to minimize the objective function shown above, except that in k-means the membership values are either zero or one, and cannot take values in between, i.e. w_ij ∈ {0, 1}.
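The update equations above translate directly into code. Below is a minimal NumPy sketch of the alternating FCM updates (centroids, then memberships); the function name, random initialization, and fixed iteration count are illustrative assumptions rather than a reference implementation.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X is (n, d), c is the number of clusters,
    m > 1 is the fuzzifier. Returns (centroids, membership matrix W of shape (n, c))."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = rng.random((n, c))
    W /= W.sum(axis=1, keepdims=True)          # start from a random row-stochastic W
    for _ in range(n_iter):
        # Centroid update: mean of the points weighted by w_ij ** m.
        Wm = W ** m
        centroids = (Wm.T @ X) / Wm.sum(axis=0)[:, None]
        # Membership update: w_ij = 1 / sum_k (d_ij / d_ik) ** (2 / (m - 1)).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero at a centroid
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        W = 1.0 / ratio.sum(axis=2)
    return centroids, W

# Usage: two one-dimensional clusters, similar to the example later in the article.
X = np.array([[1.0], [1.2], [0.8], [5.0], [5.3], [4.7], [3.0]])
centroids, W = fuzzy_c_means(X, c=2, m=2.0)
print(np.round(W, 2))   # the middle point (3.0) receives partial membership in both clusters
```

With a larger fuzzifier m the rows of W become more uniform (fuzzier clusters); as m approaches 1 they approach hard 0/1 assignments, as discussed next.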
In fuzzy c-means, the degree of fuzziness is parametrized by m ∈ (1, ∞), where a larger m results in fuzzier clusters. In the limit m → 1, the memberships w_ij converge to 0 or 1, and the fuzzy c-means objective coincides with that of k-means. In the absence of experimentation or domain knowledge, m is commonly set to 2. The algorithm also minimizes intra-cluster variance, but it has the same problems as k-means: the minimum is a local minimum, and the results depend on the initial choice of weights. There are several implementations of this algorithm that are publicly available. Related algorithms Variants of fuzzy c-means (FCM) that determine the number of clusters automatically can enhance detection accuracy. Using a mixture of Gaussians along with the expectation-maximization algorithm is a more statistically formalized method that includes some of these ideas: partial membership in classes. Example To better understand this principle, a classic example of mono-dimensional data is given below on an x axis. This data set can be traditionally grouped into two clusters. By selecting a threshold on the x-axis, the data is separated into two clusters. The resulting clusters are labelled 'A' and 'B', as seen in the following image. Each point belonging to the data set would therefore have a membership coefficient of 1 or 0. This membership coefficient of each corresponding data point is represented by the inclusion of the y-axis. In fuzzy clustering, each data point can have membership to multiple clusters. By relaxing the definition of membership coefficients from strictly 1 or 0, these values can take any value between 0 and 1. The following image shows the data set from the previous clustering, but now fuzzy c-means clustering is applied. First, a new threshold value defining two clusters may be generated. Next, new membership coefficients for each data point are generated based on the cluster centroids, as well as the distance from each cluster centroid. As one can see, the middle data point belongs to both cluster A and cluster B; the value of 0.3 is this data point's membership coefficient for cluster A. Applications and Extensions Clustering problems have applications in surface science, biology, medicine, psychology, economics, and many other disciplines. Fuzzy c-means has been extended in various ways (Gustafson-Kessel, Gath-Geva, possibilistic c-means, kernel-based fuzzy c-means, and fuzzy c-means with partially provided centroids). In the field of bioinformatics, clustering is used for a number of applications. One use is as a pattern recognition technique to analyze gene expression data from RNA-sequencing data or other technologies. In this case, genes with similar expression patterns are grouped into the same cluster, and different clusters display distinct, well-separated patterns of expression. Use of clustering can provide insight into gene function and regulation. Because fuzzy clustering allows genes to belong to more than one cluster, it allows for the identification of genes that are conditionally co-regulated or co-expressed. For example, one gene may be acted on by more than one transcription factor, and one gene may encode a protein that has more than one function. Thus, fuzzy clustering is more appropriate than hard clustering. Fuzzy c-means has been a very important tool for image processing in clustering objects in an image.
In the 1970s, mathematicians introduced the spatial term into the FCM algorithm to improve the accuracy of clustering under noise. Furthermore, FCM algorithms have been used to distinguish between different activities using image-based features such as the Hu and the Zernike moments. Alternatively, a fuzzy logic model can be described on fuzzy sets defined on the three components of the HSL color space; the membership functions aim to describe colors in a way that follows the human intuition of color identification. In marketing, customers can be grouped into fuzzy clusters based on their needs, brand choices, psycho-graphic profiles, or other marketing-related partitions.[citation needed] Image processing example Image segmentation using k-means clustering algorithms has long been used for pattern recognition, object detection, and medical imaging. However, due to real-world limitations such as noise, shadowing, and variations in cameras, traditional hard clustering is often unable to reliably perform image processing tasks as stated above.[citation needed] Fuzzy clustering has been proposed as a more applicable algorithm for performing these tasks. Below is a grayscale image that has undergone fuzzy clustering in Matlab. The original image is seen next to a clustered image. Colors are used to give a visual representation of the three distinct clusters used to identify the membership of each pixel. Below, a chart is given that defines the fuzzy membership coefficients of their corresponding intensity values. Depending on the application for which the fuzzy clustering coefficients are to be used, different pre-processing techniques can be applied to RGB images. RGB to HCL conversion is common practice. See also References
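As a rough illustration of the image-segmentation use described above, the sketch below clusters the pixel intensities of a small synthetic grayscale image into three fuzzy clusters, reusing the hypothetical `fuzzy_c_means` helper from the earlier sketch; the image, cluster count, and noise level are made-up assumptions for illustration.

```python
import numpy as np

# Synthetic 64x64 grayscale image with three intensity regions plus noise.
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:, 21:42] = 0.5
img[:, 42:] = 1.0
img += rng.normal(0, 0.05, img.shape)

# Cluster pixel intensities (each pixel is a 1-D feature vector).
pixels = img.reshape(-1, 1)
centroids, W = fuzzy_c_means(pixels, c=3)      # helper from the earlier sketch

# Hard labels for visualization; W keeps the per-pixel membership degrees.
labels = W.argmax(axis=1).reshape(img.shape)
```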
========================================
[SOURCE: https://en.wikipedia.org/wiki/Perceptron] | [TOKENS: 6506]
Contents Perceptron In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function that can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. History The artificial neuron network was invented in 1943 by Warren McCulloch and Walter Pitts in A logical calculus of the ideas immanent in nervous activity. In 1957, Frank Rosenblatt was at the Cornell Aeronautical Laboratory. He simulated the perceptron on an IBM 704. Later, he obtained funding by the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center, to build a custom-made computer, the Mark I Perceptron. It was first publicly demonstrated on 23 June 1960. The machine was "part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters". Rosenblatt described the details of the perceptron in a 1958 paper. His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, R, which stand for "projection", "association" and "response". He presented at the first international symposium on AI, Mechanisation of Thought Processes, which took place in 1958 November. Rosenblatt's project was funded under Contract Nonr-401(40) "Cognitive Systems Research Program", which lasted from 1959 to 1970, and Contract Nonr-2381(00) "Project PARA" ("PARA" means "Perceiving and Recognition Automata"), which lasted from 1957 to 1963. In 1959, the Institute for Defense Analysis awarded his group a $10,000 contract. By September 1961, the ONR awarded further $153,000 worth of contracts, with $108,000 committed for 1962. The ONR research manager, Marvin Denicoff, stated that ONR, instead of ARPA, funded the Perceptron project, because the project was unlikely to produce technological results in the near or medium term. Funding from ARPA go up to the order of millions dollars, while from ONR are on the order of 10,000 dollars. Meanwhile, the head of IPTO at ARPA, J.C.R. Licklider, was interested in 'self-organizing', 'adaptive' and other biologically-inspired methods in the 1950s; but by the mid-1960s he was openly critical of these, including the perceptron. Instead he strongly favored the logical AI approach of Simon and Newell. The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the Mark I Perceptron with the project name "Project PARA", designed for image recognition. The machine is currently in Smithsonian National Museum of American History. The Mark I Perceptron had three layers. One version was implemented as follows: Rosenblatt called this three-layered perceptron network the alpha-perceptron, to distinguish it from other perceptron models he experimented with. The S-units are connected to the A-units randomly (according to a table of random numbers) via a plugboard (see photo), to "eliminate any particular intentional bias in the perceptron". The connection weights are fixed, not learned. 
Rosenblatt was adamant about the random connections, as he believed the retina was randomly connected to the visual cortex, and he wanted his perceptron machine to resemble human visual perception. The A-units are connected to the R-units, with adjustable weights encoded in potentiometers, and weight updates during learning were performed by electric motors.: 193 The hardware details are in an operators' manual. In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The Photo Division of Central Intelligence Agency, from 1960 to 1964, studied the use of Mark I Perceptron machine for recognizing militarily interesting silhouetted targets (such as planes and ships) in aerial photos. Rosenblatt described his experiments with many variants of the Perceptron machine in a book Principles of Neurodynamics (1962). The book is a published version of the 1961 report. Among the variants are: The machine was shipped from Cornell to Smithsonian in 1967, under a government transfer administered by the Office of Naval Research. Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than perceptrons with one layer (also called a single-layer perceptron). Single-layer perceptrons are only capable of learning linearly separable patterns. For a classification task with some step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, are sufficient to solve many otherwise non-separable problems. In 1969, a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often incorrectly believed that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky and Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s.[verification needed] This text was reprinted in 1987 as "Perceptrons - Expanded Edition" where some errors in the original text are shown and corrected. Rosenblatt continued working on perceptrons despite diminishing funding. The last attempt was Tobermory, built between 1961 and 1967, built for speech recognition. It occupied an entire room. It had 4 layers with 12,000 weights implemented by toroidal magnetic cores. 
By the time of its completion, simulation on digital computers had become faster than purpose-built perceptron machines. Rosenblatt died in a boating accident in 1971. A simulation program for neural networks was written for the IBM 7090/7094 and was used to study various pattern recognition applications, such as character recognition; particle tracks in bubble-chamber photographs; phoneme, isolated word, and continuous speech recognition; speaker verification; and center-of-attention mechanisms for image processing. The kernel perceptron algorithm was already introduced in 1964 by Aizerman et al. Margin bounds guarantees were given for the perceptron algorithm in the general non-separable case first by Freund and Schapire (1998), and more recently by Mohri and Rostamizadeh (2013), who extend previous results and give new and more favorable L1 bounds. The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons. The solution spaces of decision boundaries for all binary functions and learning behaviors have also been studied. Definition In the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input x (a real-valued vector) to an output value f(x) (a single binary value): f(x) = h(w · x + b), where h is the Heaviside step function (which outputs 1 for an input greater than 0, and 0 otherwise), w is a vector of real-valued weights, w · x is the dot product Σ_{i=1..m} w_i x_i, m is the number of inputs to the perceptron, and b is the bias. The bias shifts the decision boundary away from the origin and does not depend on any input value. Equivalently, since w · x + b = (w, b) · (x, 1), we can add the bias term b as another weight w_{m+1} and append a coordinate 1 to each input x, and then write the perceptron as a linear classifier that passes through the origin: f(x) = h(w · x). The binary value of f(x) (0 or 1) is used to perform binary classification on x as either a positive or a negative instance. Spatially, the bias shifts the position (though not the orientation) of the planar decision boundary. In the context of neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function. The perceptron algorithm is also termed the single-layer perceptron, to distinguish it from a multilayer perceptron, which is a misnomer for a more complicated neural network. As a linear classifier, the single-layer perceptron is the simplest feedforward neural network.
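The threshold function just defined is a one-liner in code. Below is a minimal sketch of the prediction step, with the bias folded into the weight vector as described above; the function names and the example weights are illustrative assumptions.

```python
import numpy as np

def heaviside(z):
    """Heaviside step activation: 1 if z > 0, else 0."""
    return (z > 0).astype(int)

def predict(w, X):
    """Perceptron prediction with the bias as the last weight.
    X is (n, m); a constant 1 is appended to every input vector."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    return heaviside(X_aug @ w)

# Example: weights for a 2-input perceptron with bias -1.5 (an AND gate).
w = np.array([1.0, 1.0, -1.5])
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(predict(w, X))   # [0 0 0 1]
```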
Power of representation From an information theory point of view, a single perceptron with K inputs has a capacity of 2K bits of information. This result is due to Thomas Cover. Specifically, let T(N, K) be the number of ways to linearly separate N points in K dimensions; then T(N, K) = 2^N when K ≥ N, and T(N, K) = 2 Σ_{k=0..K−1} C(N−1, k) when K < N, where C(N−1, k) denotes the binomial coefficient. When K is large, T(N, K) / 2^N is very close to one when N ≤ 2K, but very close to zero when N > 2K. In words, one perceptron unit can almost certainly memorize a random assignment of binary labels on N points when N ≤ 2K, but almost certainly not when N > 2K. When operating on only binary inputs, a perceptron is called a linearly separable Boolean function, or threshold Boolean function. The sequence of numbers of threshold Boolean functions on n inputs is OEIS A000609. The value is known exactly only up to the case n = 9, but the order of magnitude is known quite exactly: it has upper bound 2^(n² − n log₂ n + O(n)) and lower bound 2^(n² − n log₂ n − O(n)). Any Boolean linear threshold function can be implemented with only integer weights. Furthermore, the number of bits necessary and sufficient for representing a single integer weight parameter is Θ(n ln n). A single perceptron can learn to classify any half-space. It cannot solve any linearly nonseparable vectors, such as the Boolean exclusive-or problem (the famous "XOR problem"). A perceptron network with one hidden layer can learn to classify any compact subset arbitrarily closely. Similarly, it can also approximate any compactly supported continuous function arbitrarily closely. This is essentially a special case of the theorems by George Cybenko and Kurt Hornik. Perceptrons (Minsky and Papert, 1969) studied the kind of perceptron networks necessary to learn various Boolean functions. Consider a perceptron network with n input units, one hidden layer, and one output, similar to the Mark I Perceptron machine. It computes a Boolean function of type f : 2^n → 2. They call a function conjunctively local of order k iff there exists a perceptron network such that each unit in the hidden layer connects to at most k input units. Theorem (Theorem 3.1.1): The parity function is conjunctively local of order n. Theorem (Section 5.5): The connectedness function is conjunctively local of order Ω(n^(1/2)). Learning algorithm for a single-layer perceptron Below is an example of a learning algorithm for a single-layer perceptron with a single output unit. For a single-layer perceptron with multiple output units, since the weights of one output unit are completely separate from all the others', the same algorithm can be run for each output unit. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable.
Nonetheless, the learning algorithm described in the steps below will often work, even for multilayer perceptrons with nonlinear activation functions. When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation. We first define some variables: r is the learning rate; D = {(x_1, d_1), ..., (x_s, d_s)} is the training set of s samples, where x_j is the feature vector of the j-th sample (with x_{j,i} denoting its i-th feature and x_{j,0} = 1 as a constant feature absorbing the bias) and d_j is the desired output for that sample; w_i(t) is the i-th weight at time t; and y_j(t) denotes the actual output of the perceptron for sample j at time t. The algorithm then proceeds in two steps: 1. Initialize the weights, for example to 0 or to small random values. 2. For each sample j in the training set D: (a) compute the output y_j(t) = h(w(t) · x_j); (b) update the weights, w_i(t+1) = w_i(t) + r (d_j − y_j(t)) x_{j,i}, for every feature i. For offline learning, the second step may be repeated until the iteration error (1/s) Σ_{j=1..s} |d_j − y_j(t)| is less than a user-specified error threshold γ, or a predetermined number of iterations has been completed, where s is again the size of the sample set. The algorithm updates the weights after every training sample in step 2b, though one may note that the weights remain unchanged whenever d_j = y_j(t). A single perceptron is a linear classifier. It can only reach a stable state if all input vectors are classified correctly. In case the training set D is not linearly separable, i.e. if the positive examples cannot be separated from the negative examples by a hyperplane, then the algorithm would not converge, since there is no solution. Hence, if linear separability of the training set is not known a priori, one of the training variants below should be used. Detailed analysis and extensions to the convergence theorem are in Chapter 11 of Perceptrons (1969). Linear separability is testable in time min(O(n^(d/2)), O(d^(2n)), O(n^(d−1) ln n)), where n is the number of data points and d is the dimension of each point.
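Here is a minimal sketch of the update rule just described (single output unit, Heaviside activation, bias folded into the weights as a constant feature); the helper name and the simple stopping criterion are illustrative assumptions.

```python
import numpy as np

def train_perceptron(X, d, r=1.0, max_epochs=100):
    """Single-layer perceptron learning.
    X: (s, m) inputs, d: (s,) desired outputs in {0, 1}, r: learning rate."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])   # append constant 1 for the bias
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x_j, d_j in zip(X_aug, d):
            y_j = 1 if w @ x_j > 0 else 0              # step 2a: compute the output
            w += r * (d_j - y_j) * x_j                 # step 2b: update the weights
            errors += int(d_j != y_j)
        if errors == 0:                                 # every sample classified correctly
            break
    return w

# Linearly separable example: the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([0, 0, 0, 1])
print(train_perceptron(X, d))
```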
If the training set is linearly separable, then the perceptron is guaranteed to converge after making finitely many mistakes. The theorem is proved by Rosenblatt et al. Perceptron convergence theorem—Given a dataset D such that max_{(x,y)∈D} ‖x‖₂ = R, and which is linearly separable by some unit vector w* with margin γ := min_{(x,y)∈D} y (w* · x), the perceptron 0-1 learning algorithm converges after making at most (R/γ)² mistakes, for any learning rate and any method of sampling from the dataset. The following simple proof is due to Novikoff (1962). The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction with which it has a negative dot product, and thus can be bounded above by O(√t), where t is the number of changes to the weight vector. However, it can also be bounded below by O(t), because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector. Suppose at step t the perceptron with weight w_t makes a mistake on a data point (x, y); then it updates to w_{t+1} = w_t + r (y − f_{w_t}(x)) x. If y = 0, the argument is symmetric, so we omit it. WLOG, y = 1; then f_{w_t}(x) = 0, f_{w*}(x) = 1, and w_{t+1} = w_t + r x. By assumption, we have separation with margin: w* · x ≥ γ. Thus, w* · w_{t+1} − w* · w_t = w* · (r x) ≥ r γ. Also, ‖w_{t+1}‖₂² − ‖w_t‖₂² = ‖w_t + r x‖₂² − ‖w_t‖₂² = 2 r (w_t · x) + r² ‖x‖₂², and since the perceptron made a mistake, w_t · x ≤ 0, so ‖w_{t+1}‖₂² − ‖w_t‖₂² ≤ r² ‖x‖₂² ≤ r² R². Since we started with w_0 = 0, after making N mistakes we have ‖w‖₂ ≤ √(N r² R²), but also ‖w‖₂ ≥ w · w* ≥ N r γ. Combining the two, we have N ≤ (R/γ)². While the perceptron algorithm is guaranteed to converge on some solution in the case of a linearly separable training set, it may still pick any solution, and problems may admit many solutions of varying quality. The perceptron of optimal stability, nowadays better known as the linear support-vector machine, was designed to solve this problem (Krauth and Mezard, 1987). When the dataset is not linearly separable, there is no way for a single perceptron to converge. However, we still have the Perceptron cycling theorem—If the dataset D has only finitely many points, then there exists an upper bound M such that for any starting weight vector w_0, every weight vector w_t has norm bounded by ‖w_t‖ ≤ ‖w_0‖ + M. This was first proved by Bradley Efron. Consider a dataset where the x are drawn from {−1, +1}^n, that is, the vertices of an n-dimensional hypercube centered at the origin, and y = θ(x_i): all data points with positive x_i have y = 1, and vice versa. By the perceptron convergence theorem, a perceptron would converge after making at most n mistakes. If we were to write a logical program to perform the same task, each positive example shows that one of the coordinates is the right one, and each negative example shows that its complement is a positive example. By collecting all the known positive examples, we eventually eliminate all but one coordinate, at which point the dataset is learned. This bound is asymptotically tight in terms of the worst case. In the worst case, the first presented example is entirely new and gives n bits of information, but each subsequent example differs minimally from previous examples and gives 1 bit each. After n + 1 examples, there are 2n bits of information, which is sufficient for the perceptron (with 2n bits of information).
However, it is not tight in terms of expectation if the examples are presented uniformly at random, since the first would give n {\displaystyle n} bits, the second n / 2 {\displaystyle n/2} bits, and so on, taking O ( ln ⁡ n ) {\displaystyle O(\ln n)} examples in total. Variants The pocket algorithm with ratchet (Gallant, 1990) solves the stability problem of perceptron learning by keeping the best solution seen so far "in its pocket". The pocket algorithm then returns the solution in the pocket, rather than the last solution. It can be used also for non-separable data sets, where the aim is to find a perceptron with a small number of misclassifications. However, these solutions appear purely stochastically and hence the pocket algorithm neither approaches them gradually in the course of learning, nor are they guaranteed to show up within a given number of learning steps. The Maxover algorithm (Wendemuth, 1995) is "robust" in the sense that it will converge regardless of (prior) knowledge of linear separability of the data set. In the linearly separable case, it will solve the training problem – if desired, even with optimal stability (maximum margin between the classes). For non-separable data sets, it will return a solution with a computable small number of misclassifications. In all cases, the algorithm gradually approaches the solution in the course of learning, without memorizing previous states and without stochastic jumps. Convergence is to global optimality for separable data sets and to local optimality for non-separable data sets. The Voted Perceptron (Freund and Schapire, 1999), is a variant using multiple weighted perceptrons. The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weights vector with the final weights of the last perceptron. Each perceptron will also be given another weight corresponding to how many examples do they correctly classify before wrongly classifying one, and at the end the output will be a weighted vote on all perceptrons. In separable problems, perceptron training can also aim at finding the largest separating margin between the classes. The so-called perceptron of optimal stability can be determined by means of iterative training and optimization schemes, such as the Min-Over algorithm (Krauth and Mezard, 1987) or the AdaTron (Anlauf and Biehl, 1989)). AdaTron uses the fact that the corresponding quadratic optimization problem is convex. The perceptron of optimal stability, together with the kernel trick, are the conceptual foundations of the support-vector machine. The α {\displaystyle \alpha } -perceptron further used a pre-processing layer of fixed random weights, with thresholded output units. This enabled the perceptron to classify analogue patterns, by projecting them into a binary space. In fact, for a projection space of sufficiently high dimension, patterns can become linearly separable. Another way to solve nonlinear problems without using multiple layers is to use higher-order networks (sigma-pi unit). In this type of network, each element in the input vector is extended with each pairwise combination of multiplied inputs (second order). This can be extended to an n-order network. A generalization of the perceptron model is the Receptron that incorporates non-linear interactions between inputs. A single receptron is able to classify non-linear Boolean functions. 
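Returning to the pocket algorithm with ratchet described earlier in this section: the idea is to run ordinary perceptron updates while keeping a copy of the best weight vector seen so far and returning that copy. The sketch below is a rough illustration under those assumptions, not Gallant's exact formulation; the function name and update budget are made up for the example.

```python
import numpy as np

def pocket_perceptron(X, d, r=1.0, max_updates=1000, seed=0):
    """Perceptron with a 'pocket': keeps the weights with the fewest training
    mistakes seen so far, so it remains usable on non-separable data."""
    rng = np.random.default_rng(seed)
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(X_aug.shape[1])
    predict = lambda v: (X_aug @ v > 0).astype(int)
    pocket_w, pocket_errors = w.copy(), int(np.sum(predict(w) != d))
    for _ in range(max_updates):
        j = rng.integers(len(d))                  # visit samples in random order
        y_j = 1 if X_aug[j] @ w > 0 else 0
        w += r * (d[j] - y_j) * X_aug[j]          # ordinary perceptron update
        errors = int(np.sum(predict(w) != d))
        if errors < pocket_errors:                # ratchet: the pocket only ever improves
            pocket_w, pocket_errors = w.copy(), errors
    return pocket_w, pocket_errors
```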
It should be kept in mind, however, that the best classifier is not necessarily that which classifies all the training data perfectly. Indeed, if we had the prior constraint that the data come from equi-variant Gaussian distributions, the linear separation in the input space is optimal, and the nonlinear solution is overfitted. Other linear classification algorithms include Winnow, support-vector machine, and logistic regression. Like most other techniques for training linear classifiers, the perceptron generalizes naturally to multiclass classification. Here, the input x {\displaystyle x} and the output y {\displaystyle y} are drawn from arbitrary sets. A feature representation function f ( x , y ) {\displaystyle f(x,y)} maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vector w {\displaystyle w} , but now the resulting score is used to choose among many possible outputs: Learning again iterates over the examples, predicting an output for each, leaving the weights unchanged when the predicted output matches the target, and changing them when it does not. The update becomes: This multiclass feedback formulation reduces to the original perceptron when x {\displaystyle x} is a real-valued vector, y {\displaystyle y} is chosen from { 0 , 1 } {\displaystyle \{0,1\}} , and f ( x , y ) = y x {\displaystyle f(x,y)=yx} . For certain problems, input/output representations and features can be chosen so that a r g m a x y f ( x , y ) ⋅ w {\displaystyle \mathrm {argmax} _{y}f(x,y)\cdot w} can be found efficiently even though y {\displaystyle y} is chosen from a very large or even infinite set. Since 2002, perceptron training has become popular in the field of natural language processing for such tasks as part-of-speech tagging and syntactic parsing (Collins, 2002). It has also been applied to large-scale machine learning problems in a distributed computing setting. References Further reading External links
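The multiclass formulation above scores each candidate output with w · f(x, y) and predicts the argmax; on a mistake, the weights move toward the feature vector of the correct output and away from that of the prediction. Below is a minimal sketch under those assumptions (the function names and the simple per-class feature map are illustrative, not a fixed API):

```python
import numpy as np

def multiclass_perceptron(X, labels, n_classes, feature, dim, epochs=10, r=1.0):
    """Multiclass perceptron. `feature(x, y)` returns the joint feature
    vector f(x, y) of length `dim`."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y in zip(X, labels):
            scores = [w @ feature(x, c) for c in range(n_classes)]
            y_hat = int(np.argmax(scores))        # predict the highest-scoring output
            if y_hat != y:                        # update only on a mistake
                w += r * (feature(x, y) - feature(x, y_hat))
    return w

# A simple joint feature map: one copy of the input per class.
def make_feature(n_features, n_classes):
    def feature(x, y):
        f = np.zeros(n_features * n_classes)
        f[y * n_features:(y + 1) * n_features] = x
        return f
    return feature
```

With this particular feature map, the multiclass perceptron reduces to one weight block per class, and for two classes it recovers the ordinary binary update.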
========================================
[SOURCE: https://en.wikipedia.org/wiki/Artificial_neural_network] | [TOKENS: 9727]
Contents Neural network (machine learning) In machine learning, a neural network (NN) or neural net, also called an artificial neural network (ANN), is a computational model inspired by the structure and functions of biological neural networks. A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the totality of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers. Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information. Training Neural networks are typically trained through empirical risk minimization, which is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data. History Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement. Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing. 
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956). In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research. R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence. The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning. Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." The first deep learning multilayer perceptron (MLP) trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned internal representations to classify non-linearily separable pattern classes. Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning. Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967). 
In 1976 transfer learning was introduced in neural networks learning. Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation. Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision. The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images. From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments. One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past. In 1982 a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition of computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks. 
In cognitive psychology, the journal American Psychologist in early 1980's carried out a debate on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982 the Crossbar Adaptive Array gave a neural network model of cognition-emotion relation. It was an example of a debate where an AI system, a recurrent neural network, contributed to an issue in the same time addressed by cognitive psychology. Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology. In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple applications domains. This was not yet the modern version of LSTM, which required the forget gate, which was introduced in 1999. It became the default choice for RNN architecture. During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models. Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly. In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning". Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications. Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during 2014–2018 period. 
The GAN principle was originally published in 1991 by Jürgen Schmidhuber who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras et al. Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerning deepfakes. Diffusion models (2015) eclipsed GANs in generative modeling since then, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net. During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need. It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture. Models ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph. An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another, allowing weights to choose the signal between neurons. ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.[citation needed] To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. 
This weighted sum is sometimes called the activation. It is then passed through a (usually nonlinear) activation function to produce the output (see the sketch below). The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.

The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single-layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.

A hyperparameter is a constant parameter that defines a configurable part of the learning process and whose value is set prior to training. Examples of hyperparameters include the learning rate, the batch size and regularization parameters. The performance of a neural network is strongly influenced by the choice of hyperparameter values, and thus the hyperparameters are often optimized as part of the training process, a process called hyperparameter tuning or hyperparameter optimization.

Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If, after learning, the error rate is too high, the network typically must be redesigned. Practically, this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.

The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability.
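The weighted-sum-plus-bias computation and the nonlinear activation described earlier in this passage can be illustrated with a minimal sketch. The function names and the choice of a sigmoid activation are illustrative assumptions, not prescriptions from this article.

import numpy as np

def sigmoid(z):
    # An example of a (usually nonlinear) activation function.
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term (the "activation"),
    # passed through the activation function to produce the output.
    return sigmoid(np.dot(weights, inputs) + bias)

def layer_output(inputs, weight_matrix, biases):
    # A fully connected layer: every neuron receives every input.
    return sigmoid(weight_matrix @ inputs + biases)

# Example: three inputs feeding a layer of two neurons.
x = np.array([0.5, -1.0, 2.0])
W = np.array([[0.1, 0.4, -0.2],
              [0.3, -0.5, 0.8]])
b = np.array([0.0, 0.1])
print(layer_output(x, W, b))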
In order to avoid oscillation inside the network, such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.[citation needed]

While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) or because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).[citation needed]

Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.[citation needed]

Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task.

Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.

In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a, where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data (a numerical sketch follows below). The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized).
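The trivial example above can be checked numerically: minimizing C = E[(x − a)²] by gradient descent drives the constant a toward the mean of the data. The data values, step size and iteration count below are arbitrary assumptions made for illustration.

import numpy as np

# Data whose mean the constant model f(x) = a should recover.
x = np.array([1.0, 2.0, 4.0, 7.0])

a = 0.0              # initial guess for the constant
learning_rate = 0.1

for _ in range(200):
    # Cost C = E[(x - a)^2]; its gradient with respect to a is -2 * E[x - a].
    grad = -2.0 * np.mean(x - a)
    a -= learning_rate * grad   # gradient-descent step

print(a, np.mean(x))  # a converges to the sample mean (3.5)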
Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.

In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest-cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually can only be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.

Formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC. ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks (an illustrative sketch of one standard algorithm of this kind, tabular Q-learning, is given below).

Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning, named the crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix W = ||w(a,s)||, the crossbar self-learning algorithm performs in each iteration a computation in which the backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment, where it behaves, and the other is the genetic environment, from which it receives initial emotions (only once) about situations to be encountered in the behavioral environment.
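The reinforcement-learning setting described above can be made concrete with tabular Q-learning, one standard algorithm of this kind; the algorithm, the toy environment, and all hyperparameters below are illustrative assumptions, not taken from this article (the sketch maximizes reward, which is equivalent to minimizing cost).

import random

# Hypothetical toy environment: states 0..3 on a line; action 0 moves left,
# action 1 moves right; reaching state 3 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(state, action):
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for _ in range(300):
    s, done, steps = 0, False, 0
    while not done and steps < 200:      # cap episode length for safety
        steps += 1
        # Explore a new action with probability epsilon, otherwise exploit prior learning.
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s_next, r, done = step(s, a)
        # Update the action value toward the reward plus the discounted best future value.
        target = r + (0.0 if done else gamma * max(Q[s_next]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s_next

print(Q)   # the learned values favor moving right, toward the goal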
Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in the behavioral environment that contains both desirable and undesirable situations.

Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to getting caught in "dead ends".

Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions[citation needed] or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.

Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.

In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.

Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set (a small sketch contrasting these modes is given below).

Types

ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers. Some of the main breakthroughs include:

Network design

Using artificial neural networks requires an understanding of their characteristics.
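The stochastic, batch, and mini-batch modes described above can be contrasted with a minimal sketch, using plain linear regression as a stand-in model; the data, step size, and batch size are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)    # noisy targets

def gradient(w, Xb, yb):
    # Gradient of the mean squared error for a linear model on one batch.
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(3)
learning_rate = 0.1
batch_size = 10    # 1 gives stochastic learning, len(X) gives batch learning

for epoch in range(50):
    order = rng.permutation(len(X))            # samples selected stochastically
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]  # one mini-batch
        w -= learning_rate * gradient(w, X[idx], y[idx])

print(w)   # approaches true_w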
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides functions that help with building a deep network from scratch, and a deep network can also be implemented with TensorFlow or Keras.

Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc. A Python training function of this kind takes the training dataset, the number of hidden-layer units, the learning rate, and the number of iterations as parameters; an illustrative sketch of such a function is given at the end of this passage.[citation needed]

Monitoring and concept drift detection of ANNs

When neural networks are deployed in real-world applications, the statistical properties of the input data may change over time, a phenomenon known as concept drift or non-stationarity. Drift can reduce predictive accuracy and lead to unreliable or biased decisions if it is not detected and corrected. In practice, this means that the model's accuracy in deployment may differ substantially from the levels observed during training or cross-validation. Several strategies have been developed to monitor neural networks for drift and degradation.

Applications

Because of their ability to model and reproduce nonlinear processes, artificial neural networks have found applications in many disciplines. These include:

ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information. ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs can also help mitigate flooding when used to model rainfall–runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, for detecting botnets, credit card fraud and network intrusions. ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, the dynamics of neural circuitry arising from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory from the individual neuron to the system level. It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
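The training function referred to above was not reproduced in this text; what follows is an illustrative reconstruction under stated assumptions: a single hidden layer, a sigmoid activation, full-batch gradient descent on the mean squared error, and an invented function name train.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden, learning_rate, n_iterations):
    # Train a one-hidden-layer network on (X, y) by full-batch gradient descent.
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
    b2 = np.zeros(1)

    for _ in range(n_iterations):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = h @ W2 + b2
        err = out - y.reshape(-1, 1)

        # Backward pass (backpropagation of the mean-squared-error gradient).
        grad_out = 2.0 * err / len(X)
        grad_W2 = h.T @ grad_out
        grad_b2 = grad_out.sum(axis=0)
        grad_h = (grad_out @ W2.T) * h * (1.0 - h)
        grad_W1 = X.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)

        # Gradient-descent updates scaled by the learning rate.
        W1 -= learning_rate * grad_W1
        b1 -= learning_rate * grad_b1
        W2 -= learning_rate * grad_W2
        b2 -= learning_rate * grad_b2

    return W1, b1, W2, b2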
Beyond their traditional applications, artificial neural networks are increasingly being used in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.

Theoretical properties

The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters. A specific recurrent architecture with rational-valued weights (as opposed to full-precision real-number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.[failed verification]

A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity. Two notions of capacity are known to the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory and finds the maximum capacity under the best possible circumstances, that is, with input data in a specific form. It has been noted that the VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.

Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical. Another issue worth mentioning is that training may cross a saddle point, which may lead the convergence in the wrong direction. The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, ANNs are observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters. Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models, but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error on unseen data due to overfitting.

Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the network's output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.

By assigning a softmax activation function, a generalization of the logistic function, to the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications. The softmax activation function is y_i = exp(x_i) / Σ_j exp(x_j), where x_i is the total input to output neuron i and the sum runs over all output neurons.

Criticism

A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation. Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too-large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC. Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single-lane, multi-lane, dirt, etc.), and a large amount of his research was devoted to extrapolating multiple training scenarios from a single training experience and to preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right).

A central claim[citation needed] of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are.
No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything. One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.

Technology writer Roger Bridgman commented: Neural networks, for instance, are in the dock not only because they have been hyped to high heaven (what hasn't?), but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.

Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed to the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs. non-local learning, as well as shallow vs. deep architecture. Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore a serial cascade cannot catch all major statistical dependencies.

Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons, which requires enormous CPU power and time.[citation needed] Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days. Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.

Neural networks are dependent on the quality of the data they are trained on; thus, low-quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.

Recent advancements and future directions

Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.[citation needed]

In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.

By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large-vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.[citation needed]

In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.

In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
ANNs are used for stock market prediction and credit scoring. They require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.[citation needed]

ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.[citation needed]

ANNs such as generative adversarial networks (GANs) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry, generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where non-player characters (NPCs) can make decisions based on all the characters currently in the game.