Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise." Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.
In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English. Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
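The family resemblance to modern class inheritance is easy to see. The following is a minimal sketch (hypothetical names, not any historical frame system) of a "bird" frame whose default assumptions are inherited by a sub-frame and overridden where they fail:

```python
# A frame is a structured set of default assumptions; sub-frames
# inherit the defaults and may override them, much as Minsky described.
class Bird:
    flies = True       # default assumption: birds fly
    eats = "worms"     # default assumption: birds eat worms

class Penguin(Bird):
    flies = False      # the exception overrides the inherited default

print(Bird().flies)     # True: the usual, "scruffy" assumption
print(Penguin().flies)  # False: strict deduction from the default would fail here
```

The sketch makes Minsky's point: the defaults are useful precisely because they are usually, not always, true.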
= = Boom 1980–1987 = =
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.
= = = The rise of expert systems = = =
An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.
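The core mechanism can be sketched in a few lines. Below is a toy forward-chaining engine over hypothetical if-then rules of the kind elicited from experts; it is an illustrative sketch, not the actual architecture of Dendral or MYCIN:

```python
# Toy forward chaining: a rule fires when all of its premises are
# among the known facts, adding its conclusion as a new fact.
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),       # hypothetical rules
    ({"suspect_meningitis"}, "recommend_lumbar_puncture"),
]
facts = {"fever", "stiff_neck"}

changed = True
while changed:                      # repeat until no rule adds anything new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```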
Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.
In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.
= = = The knowledge revolution = = =
The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect — reluctantly, for it violated the scientific canon of parsimony — that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways," writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay". Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.
The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut – the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.
Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought's development paved the way for Deep Blue.
= = = The money returns: the fifth generation project = = =
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.
Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology. DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.
= = = The revival of connectionism = = =
In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called "backpropagation" (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism, which had been largely abandoned since 1970.
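A Hopfield net is simple enough to sketch directly. The following (an illustrative sketch with made-up patterns, using the standard Hebbian storage rule rather than any code of Hopfield's) stores one pattern and recovers it from a corrupted probe:

```python
import numpy as np

# Store a pattern with the Hebbian outer-product rule, then recall it
# by repeatedly thresholding the weighted sums until the state is stable.
pattern = np.array([1, -1, 1, -1, 1, -1])      # the memory to store
W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0)                         # no self-connections

state = np.array([1, -1, -1, -1, 1, 1])        # corrupted version of the memory
for _ in range(10):                            # synchronous threshold updates
    state = np.sign(W @ state).astype(int)

print(state)   # recovers [ 1 -1  1 -1  1 -1], the stored pattern
```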
The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986 — a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.
= = Bust: the second AI winter 1987–1993 = =
The business community's fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors – the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.
= = = AI winter = = =
The term " AI winter " was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow . Their fears were well founded : in the late 80s and early 90s , AI suffered a series of financial setbacks .
The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987 . Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others . There was no longer a good reason to buy them . An entire industry worth half a billion dollars was demolished overnight .
Eventually the earliest successful expert systems , such as XCON , proved too expensive to maintain . They were difficult to update , they could not learn , they were " brittle " ( i.e. , they could make grotesque mistakes when given unusual inputs ) , and they fell prey to problems ( such as the qualification problem ) that had been identified years earlier . Expert systems proved useful , but only in a few special contexts .
In the late 80s , the Strategic Computing Initiative cut funding to AI " deeply and brutally . " New leadership at DARPA had decided that AI was not " the next wave " and directed funds towards projects that seemed more likely to produce immediate results .
By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation", had still not been met by 2010. As with other AI projects, expectations had run much higher than what was actually possible.
= = = The importance of having a body: Nouvelle AI and embodied reason = = =
In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics. They believed that, to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."
The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)
In a 1990 paper, "Elephants Don't Play Chess," robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough." In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.
= = AI 1993–present = =
The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence". AI was both more cautious and more successful than it had ever been.
= = = Milestones and Moore's Law = = =
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.
In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today. In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951. This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of "raw computer power" was slowly being overcome.
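The arithmetic is easy to check. A back-of-the-envelope calculation (our own, for illustration) shows that a two-year doubling time accounts almost exactly for the speedup between 1951 and 1997:

```python
# 46 years at one doubling every two years is 23 doublings.
doublings = (1997 - 1951) / 2
speedup = 2 ** doublings
print(f"{speedup:,.0f}x")   # 8,388,608x: the same order of magnitude as 10 million
```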
= = = Intelligent agents = = =
A new paradigm called "intelligent agents" became widely accepted during the 90s. Although earlier researchers had proposed modular "divide and conquer" approaches to AI, the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI. When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.
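In code, the abstraction is little more than a percept-to-action mapping. The sketch below (a hypothetical interface and agent, for illustration only) shows why the definition covers everything from a thermostat to a firm:

```python
from typing import Protocol

class Agent(Protocol):
    """Anything that maps a percept to an action counts as an agent."""
    def act(self, percept: float) -> str: ...

class Thermostat:
    """A minimal 'intelligent agent': it succeeds by keeping the room near 20 C."""
    def act(self, percept: float) -> str:
        return "heat_on" if percept < 20.0 else "heat_off"

agent: Agent = Thermostat()
for temperature in [18.5, 21.0]:   # the environment supplies percepts
    print(temperature, "->", agent.act(temperature))
```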
The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.
= = = " Victory of the neats " = = =
AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past . There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics , economics or operations research . The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable ; AI had become a more rigorous " scientific " discipline . Russell & Norvig ( 2003 ) describe this as nothing less than a " revolution " and " the victory of the neats " .
Judea Pearl 's highly influential 1988 book brought probability and decision theory into AI . Among the many new tools in use were Bayesian networks , hidden Markov models , information theory , stochastic modeling and classical optimization . Precise mathematical descriptions were also developed for " computational intelligence " paradigms like neural networks and evolutionary algorithms .
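The flavor of these tools can be conveyed with the simplest of them, Bayes' rule, the elementary building block of Pearl's Bayesian networks. The numbers below are made up for illustration:

```python
# Updating a belief from evidence with Bayes' rule (hypothetical numbers).
p_disease = 0.01            # prior: P(D)
p_pos_given_d = 0.95        # sensitivity: P(+ | D)
p_pos_given_not_d = 0.05    # false-positive rate: P(+ | not D)

# Total probability of a positive test, then the posterior P(D | +).
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))   # 0.161: the evidence shifts, but does not settle, the belief
```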
= = = AI behind the scenes = = =
Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems, and its solutions proved to be useful throughout the technology industry in areas such as data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google's search engine.
The field of AI receives little or no credit for these successes. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science. Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."
Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world, at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
= = = Where is HAL 9000? = = =
In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.
Marvin Minsky asks "So the question is why didn't we get HAL in 2001?" Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem. For Ray Kurzweil, the issue is computer power and, using Moore's law, he predicts that machines with human-level intelligence will appear by 2029. Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems. There are many other explanations and for each there is a corresponding research program underway.
= Cyclone Graham =
Cyclone Graham of the 2002–03 Australian region cyclone season was a weak tropical storm that affected Australia during late February and early March 2003. Graham originated from an area of convection that emerged onto water after sitting over Australia on 23 February. The interaction with a monsoon trough formed an area of low pressure that developed into Tropical Cyclone Graham on 27 February. The storm moved slowly to the east-southeast, and after turning to the south it peaked as a tropical storm and made landfall on Western Australia the next day. The cyclone weakened as it moved inland, and dissipated on 1 March. The storm dropped heavy rainfall and caused high winds, which produced flooding and downed trees. One fatality occurred, though no significant damages were reported.
= = Meteorological history = =
On 23 February 2003, an area of convection that had been situated over land for roughly a week emerged over open waters along the northern coast of Australia. The strengthening of a deep, persistent monsoon trough contributed to cyclogenesis, and a low pressure area formed. By 25 February, the low developed a banding feature in which the highest winds were located. Though the storm was located in an area of unfavorable wind shear, the Australian Bureau of Meteorology (BoM) began to issue gale warnings on the system at 0100 UTC the next day, while the low was located several hundred miles north-northeast of Port Hedland. The disturbance was initially nearly stationary as it showed signs of organization due to relaxed shear, and at 0700 UTC on 27 February, the Joint Typhoon Warning Center (JTWC) designated the storm as Tropical Cyclone Graham, as it had attained 80 km/h (50 mph) 10-minute maximum sustained winds. The first warning was issued on Graham later that day.
Graham initially exhibited characteristics of a monsoonal low; a mid-level ridge to its south drove strong westerly winds that moved the storm slowly east-southeastward. However, a deep trough eroded the ridge, allowing the cyclone to move more towards the south. According to the JTWC, the storm had intensified late on 28 February, though at the same time the BoM noted the slight weakening of the storm. Graham reached its peak intensity that day while nearing the coast.
The storm made landfall at Western Australia's Eighty Mile Beach at 1400 UTC on 28 February, and began to weaken. The storm dissipated on 1 March; the BoM issued their last advisory on the cyclone at 0400 UTC that day, and the JTWC issued theirs just two hours later. The storm's remnants died out in the country's desert.
= = Impact = =
In advance of the cyclone, the communities of Wallal, Sandfire, Punmu and Telfer were put on alert. A warning was issued for Bidyadanga, Pardoo and Cotton Creek. The storm's landfall in Western Australia brought heavy rainfall and high winds. The storm dropped 163 mm (6.4 in) of rain at Telfer in one night, over half the town's annual average; total rainfall there reached 175 mm (6.9 in). The heavy rain caused flooding and road closures, and swelled a river passing through Fitzroy Crossing, though the river only topped its banks slightly. Near that town, at Blue Bush Creek, two men were swept away while a group of people attempted to cross floodwaters. Both men were rescued, though one died before emergency services arrived. In addition to the flooding, a number of trees were downed. No significant damages were reported.
Following the storm, the name Graham was retired from the Australian region basin.
= Languedoc-Roussillon wine =
Languedoc-Roussillon wine, including the vin de pays labeled Vin de Pays d'Oc, is produced in southern France. While "Languedoc" can refer to a specific historic region of France and Northern Catalonia, usage since the 20th century (especially in the context of wine) has primarily referred to the northern part of the Languedoc-Roussillon région of France, an area which spans the Mediterranean coastline from the French border with Spain to the region of Provence. The area has around 700,000 acres (2,800 km²) under vines and is the single biggest wine-producing region in the world, being responsible for more than a third of France's total wine production. In 2001, the region produced more wine than the United States.