The following is a special contribution to this blog by Ian Stevenson. Stevenson was a 2011-2013 Computing Innovation Fellow (CIFellow) at the Redwood Center for Theoretical Neuroscience at the University of California at Berkeley. He is now an Assistant Professor at the University of Connecticut in the Department of Psychology.
In 1939 Nobel Prize winner E.D. Adrian started off one of his famous papers by saying, “Although it is easy to demonstrate the electrical activity of the brain we are still some way from understanding the full meaning of our records.” Seventy-five years later, this could still be the lede of just about any paper in neuroscience. A key challenge in “understanding the full meaning of our records” is computational. The brain computes, and it is often difficult to relate neural data to the brain’s computation. How can we link neural recordings, like the ones Adrian collected, to learning, perception, and behavior? In this post I’ve been invited to tell you a little about my experience working on this problem as a neuroscientist in the Computing Innovation Fellows program, and to give my own take on the growing space for computing research in brain science.
My path to computational neuroscience has been a bit roundabout. As an undergraduate I majored in physics, but had a strong interest in both neuroscience and programming. My junior year, I was lucky enough to have a machine learning professor (Prof. Devika Subramanian at Rice University) who was supportive of my interest in the brain and helped connect me to a neuroscience lab. In graduate school I joined the Bayesian Behavior Lab at Northwestern, working with Prof. Konrad Kording. At Northwestern I had a chance to focus my interests and gain expertise in Bayesian modeling and data analysis.
During my CIFellowship I worked at the Redwood Center for Theoretical Neuroscience, mentored by Prof. Bruno Olshausen. The Redwood Center (originally founded by Jeff Hawkins, before moving to UC Berkeley) is an exciting environment for computational research. It’s not unusual to look up at 5pm and realize you’ve spent most of the day getting feedback on a project from someone who invented a famous method like sparse coding, or debating with one of the creators of independent component analysis.
At the Redwood Center I had a chance to extend my expertise, which was mostly in supervised learning methods, to unsupervised learning. One difficulty in “understanding the full meaning” of neural recordings is that we can currently observe only a tiny fraction of the system (on the order of 100 out of roughly 100 billion neurons). During my postdoc, I developed methods for inferring the structure of this hidden, unrecorded activity from the limited recordings we do have.
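To give a flavor of this kind of problem, here is a minimal sketch in Python. It uses a generic factor-analysis model from scikit-learn, not the actual methods I developed during the fellowship, and the simulated data are purely illustrative: a few shared latent signals drive the spiking of a small recorded population, and the model tries to recover that shared structure from the spike counts alone.

```python
# Illustrative sketch: recovering shared latent structure from a small
# population of "recorded" neurons. This uses generic factor analysis,
# not the specific methods described in the post.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_time, n_neurons, n_factors = 2000, 50, 3

# Simulate low-dimensional hidden activity driving the whole population.
latents = rng.normal(size=(n_time, n_factors))
loadings = rng.normal(size=(n_factors, n_neurons))
rates = np.exp(0.5 * latents @ loadings)   # log-linear firing rates
spikes = rng.poisson(rates)                # observed spike counts

# Fit a factor model to the counts (the square-root transform roughly
# stabilizes the Poisson variance before applying a Gaussian model).
fa = FactorAnalysis(n_components=n_factors)
fa.fit(np.sqrt(spikes))

# The fitted components summarize the shared, unobserved structure
# that co-modulates the recorded neurons.
print(fa.components_.shape)   # (3, 50)
```

The real problem is much harder, of course: the hidden activity comes from millions of unrecorded neurons rather than a handful of tidy Gaussian factors, and that gap is what makes the modeling interesting.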
Postdoctoral training is almost required in neuroscience, with many experimental neuroscientists spending 5+ years as postdocs before finding faculty positions. Getting independent funding for a postdoc is still rare and quite competitive, though. I was very excited to learn about the mission of the CIFellows program and CRA’s enthusiasm for interdisciplinary fellows. Most importantly, my CIFellowship helped me establish independence as a researcher. This past fall I started as an Assistant Professor in the psychology department at the University of Connecticut. As I’m starting up my lab and taking on a whole new set of teaching and mentoring responsibilities, I’m tremendously grateful for my CIFellowship experience. It allowed me not only to develop a new set of technical skills, but also to develop a deeper appreciation of mentoring and a much clearer plan for my future research.
My own view is that, although computational neuroscience has traditionally been dominated by physicists and applied mathematicians, there is a growing role for computer scientists and statisticians. Like many fields, neuroscience is at the beginning of a data deluge. Advances in electronics and imaging now allow the activity of hundreds to thousands of neurons to be recorded simultaneously, generating gigabytes of data every second. In fact, the number of simultaneously recorded neurons has been growing roughly exponentially for decades, an analogue of Moore’s law for neural data. As recording methods improve and neural data become more complex, new statistical and computational techniques that link these data to circuit function, perception, and action are playing a central role in our understanding of the brain.
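To put those data rates in perspective, here is a quick back-of-envelope calculation. The parameter values are illustrative assumptions for hypothetical recording rigs, not figures from any particular lab:

```python
# Back-of-envelope raw data rates. All parameters are illustrative
# assumptions, not measurements from a specific system.

# Extracellular electrophysiology: ~1,000 channels sampled at 30 kHz
# with 16-bit (2-byte) samples.
ephys_bps = 1000 * 30_000 * 2
print(f"ephys:   {ephys_bps / 1e6:.0f} MB/s")      # 60 MB/s

# Volumetric fluorescence imaging: 2048 x 2048 pixel planes, 40 planes
# per brain volume, ~3 volumes per second, 16-bit pixels.
imaging_bps = 2048 * 2048 * 40 * 3 * 2
print(f"imaging: {imaging_bps / 1e9:.1f} GB/s")    # ~1.0 GB/s
```

Even a modest session at rates like these fills a laptop hard drive in minutes, which is why storage, preprocessing, and analysis pipelines have become research problems in their own right.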
Ideas from neuroscience have served as inspiration for machine learning, robotics, and computer vision since those fields began. But many computational neuroscientists, rather than designing algorithms that mimic what we know about the brain, are now turning the tools around and using machine learning to make sense of the brain itself. At the Neural Information Processing Systems Conference (NIPS) this past December (although much of the focus was on Mark Zuckerberg’s visit to the Deep Learning Workshop), there were two great workshops focused on machine learning in neuroscience: Acquiring and analyzing the activity of large neural ensembles and High-Dimensional Statistical Inference in the Brain.
Computing and data analysis are also big parts of the recently announced BRAIN Initiative and the Human Brain Project. In addition to the Redwood Center, several centers have started to explicitly combine machine learning and computational neuroscience research, such as the Gatsby Computational Neuroscience Unit at University College London and the Grossman Center for the Statistics of Mind at Columbia University. These are just a few of the new opportunities for computer scientists and statisticians to become involved in neuroscience. With any luck, in a few decades we’ll be able to say that it is easy not only to demonstrate the electrical activity of the brain, but also to understand it.