Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Stanford Designs New Chip to Improve AI Computing Efficiency

August 24th, 2022 / in AI, research horizons, Research News / by Maddy Hunter

Edge artificial intelligence (AI) is the deployment of AI devices at the edge of networks; in other words, these devices collect and compute data close to the user. A self-driving car is one example: data about the proximity of other cars, traffic, and obstacles are collected and processed by the car itself rather than in a cloud computing facility or private data center.

These technological capabilities enable organizations to increase automation and improve processes, efficiency, and safety. Currently, these edge devices are limited by their battery power: a large share of their energy goes toward moving data between the compute unit (where the data is processed) and the memory unit (where the data is stored). The key to increasing the capabilities and energy efficiency of these devices is to reduce the movement of data between these two units.
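To see why data movement dominates, consider a toy energy model. The per-operation costs below are illustrative assumptions chosen for the sketch, not measurements from the NeuRRAM work; real figures vary widely by process node and memory technology.

```python
# Toy energy model: why data movement, not arithmetic, dominates edge-AI power.
# All per-operation costs are illustrative assumptions (picojoules), not
# measured values from NeuRRAM or any specific chip.

E_MAC = 0.1          # assumed cost of one multiply-accumulate in the compute unit
E_MEM_ACCESS = 100   # assumed cost of fetching one operand from off-chip memory

def energy_breakdown(num_macs, operands_per_mac=2):
    """Energy split when every MAC must fetch its operands from memory."""
    compute = num_macs * E_MAC
    movement = num_macs * operands_per_mac * E_MEM_ACCESS
    return compute, movement

compute, movement = energy_breakdown(num_macs=1_000_000)
print(f"compute: {compute / 1e6:.2f} uJ, data movement: {movement / 1e6:.2f} uJ")
# Under these assumptions, movement costs ~2000x more than the arithmetic --
# the gap that compute-in-memory designs aim to close by avoiding the fetches.
```

Under these assumed costs, moving operands consumes orders of magnitude more energy than computing on them, which is exactly the overhead compute-in-memory architectures target.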

Stanford University engineers, working in collaboration with the lab of Gert Cauwenberghs at the University of California, San Diego, set out to solve this problem and have identified a potential solution: NeuRRAM. The novel chip integrates resistive random-access memory (RRAM), which retains data even when the power is off, with compute-in-memory (CIM), which enables AI computation directly in the memory unit. These features eliminate costly data transfers between the compute and memory units and allow large AI models to be stored in a small area footprint while consuming very little power. NeuRRAM is said to be twice as energy efficient as the newest chips while remaining just as accurate.

“Having those calculations done on the chip instead of sending information to and from the cloud could enable faster, more secure, cheaper, and more scalable AI going into the future, and give more people access to AI power,” said H.-S. Philip Wong, the Willard R. and Inez Kerr Bell Professor in the School of Engineering.

The Computing Community Consortium wrote a 2020 Quadrennial Paper (a series of white papers written by the computing community to inform new administrations) on AI at the Edge. The paper discusses potential uses of deploying AI devices “at the edge” and identifies requirements and areas of research that need to be explored before the implementation of these systems is realized.

You can read more about the NeuRRAM and the research that went into it on Stanford News.
