The Computing Community Consortium (CCC) supported a session at this year’s AAAS Annual Conference titled Social Technologies: Why We Can’t Live With Them Or Without Them. The panel was moderated by Sarita Schoenebeck, Professor of Information at the University of Michigan, and featured Andrés Monroy-Hernández, Associate Professor and co-leader of the Princeton HCI Lab at Princeton University, Motahhare Eslami, Assistant Professor of Software and Societal Systems at Carnegie Mellon University, and Bryan Newbold, Protocol Engineer at Bluesky.
The panelists provided their perspectives on how social technologies are impacting society.
Schoenebeck opened the panel by explaining that while there are many demonstrated benefits to social technologies (e.g., online conversations, relationships, and advocacy), there are also potential harms (e.g., misinformation, lack of safety and security). She then posed the following questions to the panelists:
- How do we put guardrails on social technologies, on social media, without compromising some basic rights and liberties?
- Social media companies have too much centralized power. Is decentralization a feasible approach to combat this?
- Artificial intelligence (AI) is taking over our social spaces and will continue to do so, generating sludge along the way. How do we design spaces where humans and AI coexist in a way that still lets humans get value from the interaction?
The Future of Social Media Should Be Decentralized
Monroy-Hernández opened with a discussion of the benefits and research opportunities of decentralization. Based on his experience working for multiple large technology companies, he has concluded that once a company relies on advertising to be profitable, it becomes concerned only with optimizing user engagement. It is impossible for a platform to “do no evil” when it depends on advertising, because even with good intentions, there will always be a push to keep people engaging with more content for longer.
He described how AI-generated content could be engineered to align with user preferences, removing the unpredictability and nuance that characterize human-generated content. Summarizing his point, he said, “The whole system of social technologies that depend on advertisement is flawed from the core.”
Monroy-Hernández suggested that decentralization may not be a complete solution, but it can help break complex problems into smaller, more manageable ones. He emphasized thinking in terms of protocols, not platforms, citing email as a model: users can move from Gmail to Outlook or Yahoo while retaining their content and identity, thanks to a shared protocol.
Bluesky and Mastodon are two current platforms built on protocols rather than centralized infrastructure. In this model, users can choose among various apps that connect to the same protocol, each with its own monetization model and user interface. That flexibility allows for a wider variety of community models.
“When the internet was created, you could destroy one node and the other nodes could continue working, because it was decentralized,” he said, highlighting the internet’s decentralized roots. At Princeton, he convinced the IT department to let his team run a Mastodon server so that any faculty member can have an account there. Faculty can use it to connect with people outside the Princeton server, just as an email address signals an affiliation with Princeton, MIT, or another institution while still reaching anyone.
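To make the “protocols, not platforms” idea concrete, below is a minimal sketch of how any client could discover a federated account like those on a university-run Mastodon server. It uses WebFinger (RFC 7033), the account-discovery step that Mastodon-compatible servers expose; the handle and domain in the example are hypothetical placeholders, not real accounts.

```python
# Minimal sketch: resolving a federated handle with WebFinger (RFC 7033),
# the discovery step used by Mastodon-compatible servers.
import json
import urllib.parse
import urllib.request


def resolve_handle(handle: str) -> dict:
    """Ask the account's home server where (and what) the account is."""
    _user, domain = handle.split("@", 1)
    query = urllib.parse.urlencode({"resource": f"acct:{handle}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical account; any server speaking the protocol answers the same way.
    info = resolve_handle("alice@mastodon.example")
    for link in info.get("links", []):
        print(link.get("rel"), link.get("href"))
```

Because discovery depends only on the shared protocol, the same client code works whether the account lives on a university server, a personal one, or a commercial one.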
He closed with a series of open research questions: “What is the business model of these kinds of systems, and how are they going to be sustainable in the long term? Do they rely on protocols? Do they rely on primarily venture-backed funding? If they do, if you consider the long-term strategy, is this something you want to engage with?”
Unbundling Platforms With Adversarial Interoperability
Bryan Newbold, Protocol Engineer at Bluesky, continued the conversation by outlining the benefits of decentralized protocols. Bluesky is a small, for-profit, venture-funded startup social media company with employees around the United States. It is built on an interoperable protocol designed to evolve the foundations of earlier internet standards.
Newbold noted that many Bluesky employees previously worked on attempts to build social media platforms using open protocols. Though their previous efforts struggled to gain traction, Bluesky has found significant momentum: its protocol now supports 30 million active users.
Newbold said Bluesky attributes its growing popularity to a backlash against excessive centralization in online media. He characterized the dominant platforms as “huge, corporate, global conglomerates” that are vertically and horizontally integrated, leaving users with limited alternatives. “This is a failure of market regulation,” he said, citing a lack of antitrust enforcement.
Despite increasing interest in decentralized platforms, there is still significant pressure to use centralized platforms to participate in the public sphere, particularly for professionals such as journalists and those doing public outreach. Newbold believes meaningful change will require decentralized systems to become more viable for public communication and outreach.
He pointed out several key challenges:
- Resistance to re-centralization: Systems tend to re-centralize, and protocols must be designed to keep data open and authenticatable so people have stronger control over their identity (see the sketch after this list).
- Some centralization is helpful: Many use cases require it — especially for search capabilities, which benefit from centralized indexing.
- Small teams face big challenges: Even large companies struggle to moderate content. Bluesky has a 20-person team, and still faces moderation issues.
- Specialization with interoperability: Applications should serve specific communities (e.g., a university department) but still remain interoperable.
- Security: Protocols must be secure to deter misuse and ensure that user expectations around privacy and authenticity are met.
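To illustrate what “open and authenticatable” data might look like in practice, here is a minimal sketch (not any particular protocol’s actual schema) in which a post is content-addressed with a hash and signed with a key the author controls, so anyone can verify authorship no matter which server hosts the record. It assumes the widely used Python `cryptography` package, and the record fields are illustrative placeholders.

```python
# Minimal sketch: a content-addressed, author-signed record.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author, not the hosting server, holds this signing key.
signing_key = Ed25519PrivateKey.generate()

record = {"author": "alice.example", "text": "Hello from a portable identity"}
record_bytes = json.dumps(record, sort_keys=True).encode()

# A stable content address lets any server reference exactly this record.
content_id = hashlib.sha256(record_bytes).hexdigest()
signature = signing_key.sign(record_bytes)

# Any party holding the author's public key can verify authorship,
# regardless of which app or server relayed the record.
signing_key.public_key().verify(signature, record_bytes)  # raises if invalid
print("record", content_id[:12], "verified")
```

Anchoring identity to keys and content addresses, rather than to one company’s servers, is what lets users carry their data and audience with them if they switch apps.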
He compared this to the decentralization of phone systems: decades ago, switching providers was high-friction. But regulatory intervention helped unbundle these systems, making it possible today to move your number freely across providers.
Newbold closed by noting that many new platforms have emerged for private group messaging, but few options exist for public communication at scale. “Spaces for public online conversation have a unique and important role in society and enable political organizing,” he said. “It’s important to create modern options for that kind of communication.”
AI and Our Social Lives
The final panelist, Motahhare Eslami, Assistant Professor of Software and Societal Systems at Carnegie Mellon University, focused on what she called “anti-social AI”: how artificial intelligence is affecting our social relationships.
Her first key point: it’s becoming harder to discern what’s real online. Generative AI is fueling both helpful and harmful developments.
- Helpful: AI can assist with cross-cultural communication, serve as a conversation moderator, or suggest meaningful social connections.
- Harmful: It can spread false or misleading information, making authentication more difficult.
She cited a striking example: Amazon has seen a proliferation of AI-generated books about mushroom foraging. Some contain inaccurate and even dangerous information about which mushrooms are poisonous — an error that could be life-threatening.
Eslami also raised concerns about the emotional risks of interacting with AI. In one tragic case, a teenager in Florida took his own life after forming a relationship with an AI chatbot that allegedly encouraged him to do so. “Especially for introverts and adolescents, AI can seem like a viable friend — and that is a scary prospect,” she said. She emphasized the need for AI literacy and protective policies for vulnerable populations.
Compounding the risk, she noted, is that many companies are relaxing fact-checking policies, which could lead to an increase in misinformation and disinformation on social platforms.
Eslami concluded by emphasizing the need for ethical data practices in AI development. While machine learning algorithms have existed for decades, recent advances have been driven by access to vast amounts of data — some of which was gathered without consent. “We need to protect social data from being used by AI systems,” she said. “We also need to prevent AI systems from using false or misleading data that can actually make them perform worse.”
Thank You and Stay Tuned
CCC thanks the panelists for sharing their thought-provoking insights into the future of social technologies. Stay tuned for part two of our AAAS panel recap, coming soon to the CCC blog, which will explore the conversation around platform accountability and the evolving role of computing researchers in shaping public policy.