
CCC Q&A: Trustworthy Intelligent Systems From An Interdisciplinary Lens

September 5th, 2024 / in CCC / by Petruce Jean-Charles

CCC spoke with one of its council members, Rachel Greenstadt, about her work in trustworthy intelligent systems and her approach to this research through an interdisciplinary lens.

What interests you about trustworthy intelligent systems?

When I say intelligent systems, I mean socio-technical systems that include humans and computers collaborating. Humans can be awful sometimes, but also great. Intelligent systems can do delightful things, but are also really gullible and lack social sophistication. To realize their benefits, we need to figure out how they can fit into our society and enhance the best rather than the worst instincts of humanity. The field also touches on a lot of different areas, so as someone who gets bored easily, there's always some fascinating angle to look at.

How can trustworthy intelligent systems, viewed through an interdisciplinary lens, address current societal issues?

This is a complicated, multi-faceted problem that happens to be really important. It is one of the key problems and questions facing humanity in my lifetime, and it is exciting to be part of that. Most people now believe that the printing press was a good thing, giving rise to the Renaissance, the Scientific Revolution, and the Enlightenment. At the time, however, it also created a lot of societal disruption, undermining societal institutions and resulting in decades of war and violence. Navigating rapid technological change thoughtfully can hopefully make living through this period in human history exciting rather than dystopian.

Can you give us an example of a research project that addresses those societal issues?

Here are a few examples:

  1. Trying to understand the deepfake creation community and ecosystem. How hard is it for a novice to create a deepfake? What do people who make deepfakes talk about online?
  2. Can we build intelligent systems that detect propaganda techniques or calls for internet harassment?
  3. Can we detect AI-generated text and code? What makes it stylistically different from human-generated text and code?
  4. What is the impact of generative AI on crowdsourced systems such as reviews, wikis, etc.?

Where do you see the future of trustworthy intelligent systems in 10 years?

I think over the next 10 years we are going to be figuring out the threats and promises of intelligent systems, but these systems will have a shifting set of underlying capabilities. We will start to figure out how to fit these capabilities into our lives and work around their weaknesses. For privacy and security, a lot will depend on whether it continues to be the case that you need very large, centralized, expensive models to get good results, or whether smaller, edge-based models can do almost as well. There is a ton of uncertainty, so I don't want to speculate too much.

In the next decade, navigating the balance between the evolving capabilities of intelligent systems and their integration into society will be crucial for addressing both their potential and their vulnerabilities, shaping how they enhance or challenge our collective future.
