Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Listen to Catalyzing Computing Podcast, Episode 36 – Computer Architecture with Mark D. Hill (Part 2)

June 21st, 2021 / in AI, Announcements, podcast / by Khari Douglas

Mark D. Hill

A new episode of the Computing Community Consortium's (CCC) official podcast, Catalyzing Computing, is now available. In this episode, Khari Douglas (CCC Senior Program Associate) interviews Dr. Mark D. Hill, the Gene M. Amdahl and John P. Morgridge Professor Emeritus of Computer Sciences at the University of Wisconsin-Madison and the Chair Emeritus of the CCC Council. This episode was recorded prior to Dr. Hill joining Microsoft as a Partner Hardware Architect with Azure. His research interests include parallel computer system design, memory system design, computer simulation, deterministic replay, and transactional memory. Hill discusses the importance of computer architecture, the 3C model of cache behavior, and overcoming the end of Moore's law.

Below is a highlight from the discussion of the impact of AI on the future of computer architecture. It has been lightly edited for readability; the full transcript can be found here.

[Catalyzing Computing Episode 36 – starting at 8:50] 

 

Khari: So what would you say is your highlight of the time you spent with the CCC?

 

Mark: Well, the highlight probably has to be the 2018 AAAI/CCC 20-year roadmap for artificial intelligence, even though I was not…I mean, I was helping to catalyze this, I was chair of the organization and I played bad cop to help get things out, but other people did more of the work from the CCC side, Liz Bradley and Ann Drobnis and others. But this was a really big deal and it has already catalyzed and is referenced in some pretty significant NSF programs. CRA government affairs shopped it around the government and I expect the biggest impact is to come. 

 

The key trick with AI was…well AI is pretty hot in the industry, so what do we need this roadmap for? It turns out there are things from academia that can complement industry and create a sum greater than its parts. These often include things that are a longer-term focus, and they can be issues that are maybe not industry’s number one concern. Like, social justice may not be industry’s number one concern. Maybe fairness is, maybe fairness isn’t? We could address things like that, and I think it’s a very nice, albeit longer than I would like, document.

 

Khari: Yeah, I think it’s over 100 pages. But people who are interested should check that out, and there will be links on the podcast webpage if you want to read more [read the full report here].

 

Mark: There is an executive summary that’s way shorter.

 

[Laughter]

 

Khari: That’s true. So how do you think the proliferation of AI has impacted the hardware space?

 

Mark: So, artificial intelligence has the potential to change a lot in society, hopefully mostly good. The current way it’s done is…the greatest successes have been in a part of machine learning — which is a part of AI — called deep neural networks. These currently analyze a tremendous amount of data with a tremendous amount of computation, and if we could do that even more effectively then machine learning could be used in even more situations. A big step to greater effectiveness was moving from regular processing cores to general-purpose GPUs (Graphics Processing Units), which did the data-level parallelism that we discussed before. 

 

Now there are efforts afoot to do very specialized accelerators, as we’ve discussed before, for machine learning, such as Google’s Tensor Processing Unit (TPU). I think we’re going to see much more of that for deep neural networks. As AI starts expanding to other things, not just deep neural networks, I think it’s important enough that hardware will be developed for that. 

 

Interestingly, there is a feedback path — we also have to design the hardware. So there are some small new efforts on trying to take machine learning and apply it back to the design and optimization of hardware, to maybe exceed human designers: instead of doing the design themselves, they’re doing the configuration of the AI to do the design. 

 

I think we could get a really nice synergy. I mean, you hear all this talk about AI, you might think it’s hype, but it’s pretty real.
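The “data-level parallelism” Hill refers to means applying the same operation independently across many data elements at once — exactly the structure of the matrix math inside a neural-network layer. As a rough illustration (not from the interview; the function names and data are purely illustrative), here is a sketch of one dense layer computed element by element versus as a single vectorized matrix operation, whose independent per-element work is what a GPU can execute in parallel:

```python
# Illustrative sketch of data-level parallelism using NumPy.
# Each output element of a dense layer is the same independent
# computation over different data, which GPUs run in parallel at scale.
import numpy as np

def layer_scalar(x, w, b):
    """One dense layer with ReLU, computed element by element
    (the scalar-core view)."""
    out = []
    for j in range(w.shape[1]):
        acc = b[j]
        for i in range(x.shape[0]):
            acc += x[i] * w[i, j]
        out.append(max(acc, 0.0))  # ReLU activation
    return np.array(out)

def layer_vectorized(x, w, b):
    """Same layer as one matrix operation: identical independent
    work per output element, exposed for parallel hardware."""
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)       # input vector
w = rng.standard_normal((4, 3))  # weight matrix
b = rng.standard_normal(3)       # bias vector

# Both formulations produce the same result.
assert np.allclose(layer_scalar(x, w, b), layer_vectorized(x, w, b))
```

The vectorized form does not change the arithmetic, only how it is expressed; specialized accelerators such as the TPU push this same idea further with hardware built around dense matrix operations.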

Listen to the full interview with Dr. Hill below or find it on Apple Podcasts | Spotify | Stitcher | Blubrry | Google Podcasts | iHeartRadio | SoundCloud | YouTube. If you prefer to read rather than listen, the transcript of the interview is available here.

If you are interested in appearing in an episode of the Catalyzing Computing podcast or want to contribute a guest post to the CCC blog, please complete this survey through Google Forms.

If you listen to the podcast, please take a moment to complete this listener survey – this survey will help us learn more about you and better tailor the show to the interests of our listeners.

