Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


CCC at AAAS Panel Recap: “Maintaining a Rich Breadth for Artificial Intelligence” Q&A

April 28th, 2023 / in AAAS, AI, CCC / by Catherine Gill

This blog post is a continuation of yesterday’s summary of the “Maintaining a Rich Breadth for Artificial Intelligence” panel at the 2023 AAAS meeting. The panel was moderated by Maria Gini (University of Minnesota) and comprised David Danks (University of California, San Diego), Bo Li (University of Illinois Urbana-Champaign), and Melanie Mitchell (Santa Fe Institute).


Following the panel, Dr. Gini opened the discussion up to the audience for Q&A. The first question came from a researcher in the audience:


  • To what extent do you think homogeneity is an effect of cost in terms of the available hardware? Neural networks are cheap to create and manage, and they are often the most viable type of AI to study on a limited budget.


Dr. Danks responded, saying that for some time the cost of achieving a performance gain has been lower for deep learning and neural networks than for other parts of AI, but that equation is beginning to change. The best advances are happening at companies with access to massive computing resources that universities lack. There has been a shift away from ingenuity in AI and toward computing on massive datasets, a trend Danks said he hoped to see rectified in the future.


The next question was posed by Dan Lopresti, CCC Chair and professor of Computer Science and Engineering at Lehigh University.


  • The chart that Dr. Danks displayed earlier showed that the ebb and flow of deep neural networks is not a new phenomenon. Is there something wrong with that ebb and flow, and is there an opportunity cost to devoting attention to deep learning versus other forms of AI? Should we focus more on the cost of research?


Dr. Danks responded, saying that the ebb and flow is natural, but that the graph underestimates the dominance of neural network research because it only shows the prominence of neural networks in published research papers. A lot of industry work never ends up in published papers, and that work makes up a huge share of the AI research conducted today. Dr. Mitchell added that she believes there is a natural tendency to jump on the bandwagon, and that feedback from funding agencies gives rise to these bubbles in funding. The slide Dr. Danks displayed covered only machine learning methods, not other approaches to AI, and machine learning has completely taken over AI research. There is no pressure to innovate, Mitchell stated, and she expects to see more problems with neural networks in the future that will force us to innovate. Dr. Li then responded, saying she saw this issue arise in 2011 during her PhD, when she was doing expensive programming work on Neural Support Vector Machines (NSVMs); by 2012, all research was focused on deep neural networks. It is time to reexamine neural networks and the problems they have created, said Dr. Li. Many people are now working on improving the trustworthiness of AI, a promising shift Li said she was happy to see.


The next question came from a medical researcher in the audience.


  • Listening to this discussion made me hope that we receive more help from AI in the medical field. The problem is that medical research has grown significantly, and it is nearly impossible for doctors and researchers to get a grasp of all that is known about a single topic. Can I ask an AI program to provide me with a summary of all research on a topic, and will it perform that task effectively? Currently, the available AI platforms do not seem to be capable of doing so, and they even sometimes make up references when they cannot find real resources on a topic. Will we have an effective and trustworthy tool soon to support medical research?


Dr. Mitchell responded first, saying that nobody is ever correct when predicting timelines in AI, but that an all-encompassing tool like the one the questioner described will not arrive in the near term. More specialized tools will be developed, and there is a goal of building reliable AI assistants that can scour the internet and summarize findings on a particular topic, but that goal is much harder than most people imagine. The hallucination problem with chatbots is very difficult to solve, and combating it will take more training in real-world scenarios. Dr. Li agreed with Dr. Mitchell’s assessment that an effective AI medical assistant is still some way off. She added that there are many strong approaches to using AI in medical research, especially in the prediction of protein structure. Resources such as ChatGPT currently suffer from a low degree of accuracy, but a deeper understanding of how these algorithms are developed will help researchers refine the models to ensure they are appropriate for applications that require a high degree of precision.


The following question was posed by another researcher in the audience:


  • There are social and political components to the argument over what does and does not constitute AI, and a lot of normative policing has occurred to limit the definition. How should we increase the breadth of what constitutes AI?


Dr. Mitchell responded first, saying that normative policing occurs in all areas of science, but that boundaries are changeable. The term “Artificial Intelligence” itself, Mitchell said, was coined to distinguish the field from neural networks, a distinction that has obviously changed. This goes back to Dr. Danks’ point about letting one thousand flowers bloom: how do you decide which flowers will bloom and which should wilt? Mitchell stated that current practice is too conservative, and that it is difficult to get multidisciplinary proposals funded because the disciplines involved do not fit well together. Dr. Danks added that there is a lot of boundary policing occurring at the moment, but the AI community is beginning to wake up to the problems that arise when all scientists research the same things. Danks stated that he had attended a major AI conference earlier that year, and that it featured a much more diverse set of papers than in previous years; previously, he noted, if you didn’t publish a paper on neural networks, you would not have been invited to the conference. Dr. Li added that people are beginning to rethink what AI is and what the design goals for AI programs should be.


The next question came from a researcher in the audience who was concerned about the energy required to train AI applications.


  • We know that biological intelligence can perform relatively efficiently from an energy standpoint. How can we work to bring AI closer to that benchmark moving forward?


Dr. Mitchell responded, saying this is absolutely true, and that a lot of work in neuromorphic computing has been done to make computers function more like the human brain. That work, however, has more to do with computer hardware than with algorithmic innovation. As long as we have seemingly infinite computing power, said Mitchell, there will be no impetus to create more efficient algorithms, but if we want these algorithms to work on real robots we will eventually be forced to rework them. Dr. Danks agreed that the breadth of hardware will need to become much more diverse to accommodate edge computing and neuromorphic approaches going forward. He noted that one thing the National AI Research Resource (NAIRR) calls for is broadening the types of compute available to researchers. Dr. Li added that while AI systems and biological computing differ in many ways, there is still a lot of progress to be made by drawing inspiration from nature. She pointed out that the human brain is extremely efficient, and that if we were able to mimic this efficiency we would have more effective and sustainable models.


Thank you so much for reading! We hope you enjoyed our summaries of the CCC panels at AAAS, and we look forward to attending AAAS 2024 in Denver, Colorado.
