
"Artificial intelligence" has been around for more than 60 years, and how far have we come?


Lei Feng AI technology review: in recent days, a wind of "nostalgia" has suddenly blown through the machine learning community's daily conversations

In the afterglow of GANs, solutions built on deep learning are still struggling with problems outside information science and computing; compared with single methods and single tasks, machine learning theory research now pays more attention to the big picture of how different methods and tasks relate to one another; and reinforcement learning, backed by unprecedented computing power, keeps notching new achievements while still being widely criticized for the inherent instability and low sample efficiency of its learning paradigm. In this summer of 2018, someone suddenly remembered the summer of 1955, more than 60 years ago

The birth of "artificial intelligence"

Five participants of the 1956 Dartmouth College summer artificial intelligence research project, reunited on a panel at the AI@50 conference in July 2006. From left: Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff

In the summer of 1955, several masters of computer and information science and pioneers of AI, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Elwood Shannon, drafted a proposal for a summer research project at Dartmouth, putting forward the concept of "artificial intelligence" for the first time. In the workshop proposal they wrote: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves... The goal of artificial intelligence research at this stage is to try to make machines exhibit behavior that humans would call 'intelligent'." The topics they hoped to discuss that summer were (translated into the vocabulary of modern researchers): self-programming computers, natural language, neural networks, computational complexity, self-improvement, representation (ontology), and randomness and creativity

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, 1955, J. McCarthy et al.

They also wrote: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." Accordingly, they budgeted salaries for 6 professors and 2 doctoral students, which, together with other costs and a contingency reserve, came to a total of $13,500; the amount looks shabby at a glance (even granting that the dollar was worth more then than it is now)

In the end, as we now know, the breadth and depth of these problems went far beyond what "a small group of scientists" and "a single summer" could grasp. Even today, many of them remain hot unsolved problems in artificial intelligence and machine learning; indeed, if these seven topics (in their modern phrasing) turned up as the agenda of a small machine learning conference, it would not seem strange at all

Conversely, that research topics proposed by a workshop more than 60 years ago are still being heatedly discussed and studied today cannot help but bring a wry smile. It certainly shows the foresight of the pioneers, but does it not also suggest that too few of today's papers are innovative in any essential way? We may now give better answers to some of the questions, the common feature being the adoption of new materials, new processes, and new technologies, but in many respects we have not yet raised better questions

After the Dartmouth AI conference was officially held in 1956 and coined the term "artificial intelligence", countless computer scientists, electrical engineers, linguists, neuroscientists, psychologists and others gathered under this banner, trying to advance research on intelligent systems, theories of computation, biological intelligence, and the design of human-like intelligent systems. But, as you can see, too many problems and concepts have been packed into the big basket of "artificial intelligence", and the general public has formed the bad habit of judging technical achievements by "whether the machine is like a human" and "whether the machine or the human is stronger"

Some scientists working on machine learning and intelligent systems actually find this a real headache: if the term proposed by the group at Dartmouth had been "computational intelligence" rather than "artificial intelligence", it would have made clear that reproducing human intelligence is not the only goal, nor even the most important one...

From this point of view, it can perhaps be said that the pioneers' understanding of the field was too simple and optimistic; they did not realize how deep the gaps are between "biological intelligence" and "machine intelligence", between "complex human society and culture" and "deterministic, solvable machine computation", and between "imitating biological intelligence" and "intelligent systems that help humans complete tasks". Michael I. Jordan, a "grandmaster" of machine learning (the "Michael Jordan of AI"), later wrote carefully about related topics; interested readers can see the earlier Lei Feng AI technology review article, "Michael I. Jordan: don't be blinded by deep learning"

Two stories from the summers that followed

After the concept of "artificial intelligence" was proposed in 1955, classic rule-based machine learning systems gradually developed; this was, of course, before the era of deep learning

If looking back at the 1955 research topics of "artificial intelligence" feels a certain way, what does it feel like to look back at the machine learning systems of the 1980s? Pedro Domingos, a professor of computer science and engineering at the University of Washington, gave his own view:

Deep learning, it seems, is just repeating the mistakes of earlier machine learning methods. Many people counter, however, that although the intelligent systems of both eras face very narrow tasks and are very fragile, today's systems have indeed been widely deployed, delivering real value to people all over the world

François Chollet, the author of Keras, also said: "Just as people tend to overestimate the coverage and intelligence of contemporary AI systems, they also underestimate how much these simple, narrow, task-oriented systems can do once they are scaled appropriately and widely deployed. Deep learning may look stupid from every angle and does not constitute a meaningful path to artificial intelligence, but at the same time it has the potential to make a hard-to-ignore impact on most industries. It doesn't need to be smart to be useful."

François Chollet's words are very much to the point. Deep learning is still far from the goal of "artificial intelligence", but solutions centered on deep learning have taken root in many fields. Some people have worried whether a third winter for neural networks and artificial intelligence is coming this year; however, since various machine learning methods, neural networks included, are now widely used by industry in addition to being extensively researched in academia, there is no need to fear the so-called winter

This summer is very hot. And the summers to come will be full of stories, just like the summers past

"A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence", PDF download:

Lei Feng AI technology review report
