Is your 2-year-old smarter than your computer? Yes, simply because she is human. While she might not yet know how to read or do simple addition, she has the uniquely human ability to generalize: she can take a small piece of information and abstract from it. That is what makes her smarter than a computer, and that is what we need to train computers to do. This was one of the big topics of discussion on the first day of the 6th Heidelberg Laureate Forum.
John E. Hopcroft, from Cornell University, presented a talk called “An Introduction to AI and Deep Learning.” Hopcroft started with a simple example. He recalled reading a Richard Scarry book to his two-year-old daughter, pointing out simple cartoon images such as a house and a car. Later, on a walk through their neighborhood, his daughter pointed out a fire truck. Hopcroft was surprised that she could recognize a real-life fire truck from nothing more than the cartoon image she had seen in the Richard Scarry book. What his daughter had done was learn from a cartoon image and apply that learning in the real world to identify an object. That, it turns out, is uniquely human.
How, you might ask, can we do that with computers?
Well, as Hopcroft described, it is very challenging. “That is something we need to figure out how to train networks to do. A minor change to an image can change its classification to any arbitrary classification. So we need to be careful.” There is a relationship between deep learning and how the brain works. “The earlier you do something the bigger the payoff,” as Hopcroft stated. The first two years of a child’s life are critical.

“When we are born our brains are fully connected and as we mature we edit away our experiences. The real world either stimulates or doesn’t stimulate our neural paths,” said Vinton Cerf, referring to Hopcroft’s talk. “The problem is that we do not have a very good understanding of that. We also seem to be built to forget things. This notion of forgetting seems to be very important; our brains are much more complicated than anything we have ever built.” We don’t have algorithms that can generalize from one or two examples and identify an object as a fire truck. But, as humans, we can take models of the world and make generalized abstractions. “Machines don’t do that very well. We don’t have generalized artificial intelligence,” said Cerf.
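Hopcroft’s warning that a minor change to an image can flip its classification refers to what researchers call adversarial examples. As a rough illustration, not something shown in the talk, here is a minimal sketch of the well-known fast gradient sign method in PyTorch; the `model`, `image`, and `label` are assumed to be an existing classifier, an input tensor, and its true class, and `epsilon` is a hypothetical perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method: nudge every pixel slightly in the
    direction that increases the classifier's loss. The change is
    barely visible to a human, yet it often flips the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Bound each pixel's change by epsilon and keep values in [0, 1]
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A two-year-old who recognizes a fire truck is untroubled by a few altered pixels; a trained network can be, which is part of why Hopcroft says we need to be careful.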
This is just one challenge of AI that we have yet to figure out. Vinton Cerf’s concluding comment about Hopcroft’s talk said it all, “Anyone who has made any prediction about AI has been wrong. I don’t think we even understand the definition of intelligence since the definition of intelligence has changed in the past.”
Watch Hopcroft’s talk online here and see the HLF website for all the live-streamed talks this week.
The Computing Community Consortium (CCC) recently announced a new initiative to create a Roadmap for Artificial Intelligence, led by Yolanda Gil (University of Southern California and President-Elect of AAAI) and Bart Selman (Cornell University). The plan is to hold a series of workshops in the Fall/Winter of 2018/2019, which will result in a Roadmap to be produced in the Spring of 2019. The goal of the initiative is to identify challenges, opportunities, and pitfalls, and create a compelling report that will effectively inform future federal priorities.