CCC Council member Maja Mataric from the University of Southern California provided contributions to this post.
If we train Artificial Intelligence (AI) to do our work for us, it will still need to be checked periodically for errors and random noise. That kind of detailed human oversight is not something we can skip. As AI gains more power, it will also bear more responsibility, and the decisions it makes could be deadly if incorrect. We still have much to learn about building machines that could make life-altering decisions, and we cannot predict what serious engineering flaws will emerge in the future.
Michael I. Jordan from the University of California, Berkeley, recently wrote an article for Medium called "Artificial Intelligence: The Revolution Hasn't Happened Yet." He points out that the current public understanding of and dialogue on AI can potentially blind us to "the challenges and opportunities that are presented by the full scope of AI, intelligence augmentation, and intelligent infrastructure."
Jordan stresses that while industry will continue to drive the development of these technologies, it is also critically important that academia play an "essential role." Academia has the ability to bring together interdisciplinary teams of researchers not only from "computational and statistical disciplines" but also from "the social sciences, the cognitive sciences, and the humanities." How can we even begin to imagine the next phase of AI if we do not take into account human interactions, societal norms, and other aspects of the social sciences? As Jordan says, we need to "broaden our scope, tone down the hype and recognize the serious challenges ahead."
See the full article on Medium here.