Computing Community Consortium Blog

Listen to the AAAS Sci-Mic Podcasts including the CCC’s Catalyzing Computing

February 3rd, 2021 / in AAAS, Announcements, podcast, Security / by Khari Douglas

Every February, the American Association for the Advancement of Science (AAAS) — the world’s largest multidisciplinary scientific society and the publisher of the Science family of journals — holds the AAAS Annual Meeting, which brings together scientists, engineers, and press to share and discuss their work with each other. The 2021 Annual Meeting will take place virtually next week, February 8-11. The theme of this year’s meeting is “Understanding Dynamic Ecosystems.”

The Computing Community Consortium (CCC) has attended and hosted sessions at the AAAS Annual Meeting since 2013 — learn more about those past sessions here. This year, the CCC’s official podcast, Catalyzing Computing, is part of the Sci-Mic virtual podcast library, alongside podcasts such as This Study Shows and Waste Not Why Not. Catalyzing Computing podcast episodes 27 and 28, “Global Security and Graph Analytics with Nadya Bliss (Part 1 & 2),” are featured in the library. In those episodes, CCC Senior Program Associate Khari Douglas interviews Dr. Nadya Bliss, a CCC Council Member and Executive Director of Arizona State University’s (ASU) Global Security Initiative (GSI). 

Before joining ASU in 2012, Bliss spent 10 years at MIT’s Lincoln Laboratory, most recently as a founding group leader of the Computing and Analytics Group. In part one of the podcast interview, Bliss discusses what drove her to pursue computer science, her time spent at Lincoln Lab, and the history of graph analytics. In part two, she discusses the work of Arizona State University’s Global Security Initiative and how to combat the spread of misinformation. 

Below is a highlight from the discussion on combating misinformation, taken from the transcript of the podcast:

[Catalyzing Computing Episode 28 – starting at 18:45] 


Khari: So you mentioned one of the new focuses, I guess, of GSI is on misinformation, and that’s been a hot topic of late. For context, this is being recorded June 3rd, 2020, so there’s a pandemic happening currently; within the U.S., at least, there’s some widespread social unrest around police brutality and racial inequality within the criminal justice system. But these issues have led to a lot of misinformation online. So, I guess, what do you think about the role of detecting and combating misinformation, both from a technical perspective and also in terms of how it impacts national or global security?


Nadya: Yeah, it is a strange moment that we’re in. It seems sort of an unprecedented confluence of a lot of major challenges. The way that I would frame the current moment is what’s different about it…the confluence of technology with non-technical methodologies that have been deployed at a much smaller scale previously has created this environment that for an individual is very difficult to protect against. 


Essentially, if you think about it, every time you log into any social media platform, you are just one person — so I am just Nadya — but what I am facing is basically the supercomputing, high-performance computing power of the entire social media ecosystem and a few actors or groups potentially could be manipulating in such a way to make me think a certain way….


[F]undamentally, the incentive structures of social media platforms have amplified all of this. While a company like Facebook, Twitter, or Google itself — or any media company, I don’t mean to call them out — may not necessarily be interested in amplifying mis- or disinformation, if their incentives and algorithms are structured to emphasize things that are going to produce the most clicks, then essentially what ends up being amplified is things that are exciting people in a negative or positive way. And from psychology we know that a lot of this information that is polarizing tends to make people go and see more, and if you are manipulating that aspect of human psychology and bringing in technology at scale, this essentially creates this environment.


I do want to pause here and say that one of the really, really fabulous researchers in this field is Professor [Kate] Starbird at the University of Washington, and one of the things that she has shown repeatedly with her results is that a lot of the time what ends up being a key goal, or at least a studied goal, is discord — where essentially people don’t know what to believe. I think we’re seeing a lot of that right now around the pandemic, the social unrest, all of it. We see a lot of discord and messages being amplified, and it’s just difficult to parse. So I think this is a challenging environment to be in, both as an individual and a researcher.

Listen to the full interview with Dr. Nadya Bliss here, or, if you prefer to read rather than listen, the transcripts of the podcast episodes are available: part one here and part two here.

Subscribe and listen to more episodes of the Catalyzing Computing podcast here, and learn more about the AAAS Annual Meeting and register to attend on their website. 
