Shaping the Future of AI’s Impact on Society

February 17th, 2026 / in AI, CCC, conferences / by Haley Griffin

The buzz around artificial intelligence (AI) is undeniable, with daily headlines touting its revolutionary potential. But for AI to truly transform science and society, we need to look beyond the impressive demos and massive models and ensure we achieve the desired impacts in a deliberate, responsible, and secure way.

Last week at the AAAS 2026 Annual Conference in Phoenix, Arizona, a panel organized by the Computing Community Consortium (CCC) titled “Shaping the Future of AI’s Impact on Society” captivated a crowded room of researchers and media representatives. Manish Parashar (University of Utah) moderated the panel, and the speakers were Rayid Ghani (Carnegie Mellon University), Carla P. Gomes (Cornell University), and Elham Tabassi (Brookings Institution). Haley Griffin (CCC) and Michela Taufer (University of Tennessee, Knoxville) organized the panel.

Redefining Scale

The theme of this year’s AAAS conference was “Science at Scale.” So what about scaling AI? As Ghani eloquently put it, “Scale means, ‘How do I scale the impact of the work?’ It’s not the model, it’s the impact.” He shared compelling examples from his work as part of the Data Science for Social Good program on preventing homelessness and managing mental health crises, where AI is designed to be proactive and supportive rather than reactive and autonomous. He also emphasized the importance of reusability — while a lot of models are very context-specific and need to be customizable, the ubiquity of many social issues means that a model that benefits one community is likely to help in another. The goal isn’t just a bigger, more complex algorithm, but measurable, positive changes in people’s lives. 

The panelists also challenged the traditional definition of scaling by sharing translational approaches, where research in one domain can be applied to another. As Gomes put it, “It’s easier to get funding for studying birds than for addressing social issues; large language models are powerful across domains.” She explained that models trained to track bird migration, for instance, can also be adapted to predict materials properties or to map poverty. This is especially important when research funding for social issues is limited.

Building Trust Through Robust Evaluation

A significant gap currently exists between AI’s advertised capabilities and its real-world performance. Tabassi, a key architect of the National Institute of Standards and Technology’s AI Risk Management Framework, underscored during the panel that improving trust in AI is fundamentally a socio-technical problem that requires socio-technical solutions. It is a slow, context-specific process that requires transparency about both the limits and capabilities of AI systems. “Trust is hard to build… and it doesn’t scale as fast as technology scales,” she noted. She pointed out that current benchmarks for AI success often measure narrow, task-specific capabilities rather than reliable, safe, or privacy-enhancing deployments.

Tabassi suggested a national infrastructure specifically dedicated to AI evaluation, similar to how the National Artificial Intelligence Research Resource (NAIRR) aims to provide computational and data resources. Gomes emphasized that real-world AI systems often need to be designed with multiple objectives in mind, including social, economic, and environmental considerations, which in turn requires multi-objective Pareto optimization. As an example, she highlighted the strategic planning of hydropower expansion in the Amazon basin, where AI can help meet growing energy needs while reducing adverse impacts on both people and nature.
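To make the idea of multi-objective Pareto optimization concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from the panel or from Gomes’ actual planning systems; the plan names and scores are invented. Candidate plans are scored on several objectives, and the Pareto frontier consists of the plans that no other plan beats on every objective at once.

```python
# Illustrative sketch only: a minimal Pareto-dominance filter over
# hypothetical hydropower siting plans. All names and numbers are invented.

from typing import Dict, List

# Each candidate plan is scored on objectives we want to maximize:
# energy output, river connectivity preserved, communities unaffected.
candidates: Dict[str, List[float]] = {
    "plan_A": [0.9, 0.3, 0.5],
    "plan_B": [0.7, 0.8, 0.6],
    "plan_C": [0.6, 0.7, 0.4],  # worse than plan_B on every objective
    "plan_D": [0.5, 0.9, 0.9],
}

def dominates(a: List[float], b: List[float]) -> bool:
    """True if plan `a` is at least as good as `b` on every objective
    and strictly better on at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# The Pareto frontier: plans not dominated by any other candidate.
frontier = [
    name for name, scores in candidates.items()
    if not any(dominates(other, scores)
               for other_name, other in candidates.items() if other_name != name)
]

print(frontier)  # ['plan_A', 'plan_B', 'plan_D']
```

Real planning tools involve far richer models and many more candidates, but the dominance test at the core is the same idea: rather than collapsing everything into a single score, the analysis surfaces the set of trade-offs for decision-makers to weigh.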

Addressing Resource Disparities and Systemic Challenges

The conversation also tackled practical hurdles. Academia often lacks the data and computational resources to compete with industry, particularly with foundation models, where the user queries, system responses, and infrastructure are proprietary. This creates an imbalance that hinders independent research and evaluation. Parashar also pointed out the lack of effective models and incentives for public-private partnerships. He highlighted the model being explored by the Responsible AI Initiative in Utah, which uses innovation sandboxes and regulatory mitigation to catalyze partnerships between academic researchers, entrepreneurs, and government.

Beyond technical concerns, the panelists confronted the ethical quandaries of AI infrastructure itself. A poignant question from the audience highlighted how corporate AI data centers can negatively impact marginalized communities through pollution. The consensus was that while new AI-specific regulations are evolving, existing agencies (such as the Environmental Protection Agency [EPA] and Federal Trade Commission [FTC]) need greater resources and mandates to enforce current regulations in an AI-powered world.

The Path Forward

The panel’s insights offer a roadmap for the future of AI research:

  • Prioritize Positive Human Impact: Measure AI success by real-world outcomes and impacts, not just technical model-centric metrics.
  • Scientist LLMs: Scientific reasoning is lacking in current LLM systems; to solve a problem, they need to truly understand it, which requires thinking like a scientist.
  • Full Stack Governance: AI systems need to have end-to-end governance so that if an error occurs, it is clear which aspect is the cause (e.g., chip designer, model developer, data supplier). The governance needs to withstand robust evaluation.
  • Invest in Interdisciplinary Research: Foster collaboration between AI experts and domain scientists to build customized, reasoning-based AI. AI researchers must become pseudo-experts and savvy collaborators in the fields they are helping (e.g., materials science, ecology, or social sciences).
  • Strengthen Evaluation: Develop robust, scientifically valid, transparent, and dynamic AI evaluation infrastructures that assess trustworthiness, reliability, and societal impact in deployed settings.
  • Center Impacted Humans: Impacted communities must lead the problem-definition process, and be involved throughout the life of a research project.
  • Democratize Access: Provide researchers, particularly in academia, with greater access to data and computing resources to encourage broad innovation and evaluation.
  • Empower Regulatory Bodies: Equip existing regulatory agencies to enforce regulations and standards in an evolving AI landscape.
  • Promote Partnerships: Develop models, mechanisms, and incentives for public-private partnerships that ensure AI innovations have a responsible impact on science and society at scale.

Get Involved

The AAAS panel was an exciting opportunity for panelists to examine the future of AI with input from community members. It is also part of a broader CCC effort to help create a thriving and responsible AI research ecosystem. We invite all members of the computing community to participate in the AI Research Ecosystem (AIRE) discussion forum, where you can share your opinions about the biggest challenges facing AI research today to shape the research pathways of tomorrow.
