Each lecturer will give one or two lectures on a specific topic. The lecturers below are confirmed.
Topics: Computational Neuroscience, Behavioral Neuroscience, Decision Making, Learning Brain Connectivity
Timothy E.J. Behrens FRS is a British neuroscientist. He is Deputy Director of the Wellcome Centre for Integrative Neuroimaging and Professor of Computational Neuroscience at the University of Oxford, and Honorary Lecturer at the Wellcome Centre for Imaging Neuroscience, University College London. He earned an M.Eng. and a D.Phil. from the University of Oxford. In 2020 he won the UK Life Sciences Blavatnik Award for Young Scientists, having been a finalist for this award in 2018 and 2019. He was elected a Fellow of the Royal Society in the same year.
Deputy Director, Centre for Functional MRI of the Brain (University of Oxford);
Professor of Computational Neuroscience (University of Oxford);
Honorary Lecturer, Wellcome Centre for Imaging Neuroscience (UCL)
The cellular representations and computations that allow rodents to navigate in space have been described with beautiful precision. In this talk, I will show that some of these same computations can be found in humans doing tasks that appear very different from spatial navigation. I will describe some theory that allows us to think about spatial and non-spatial problems in the same framework, and I will try to use this theory to give a new perspective on the beautiful spatial computations that inspired it. The overall goal of this work is to find a framework where we can talk about complicated non-spatial inference problems with the same precision that is currently available only in the spatial domain.
Topics: Artificial Intelligence, Neuroscience, Cognitive Psychology, Cognitive Science
Prof. Matthew Botvinick
Director of Neuroscience Research and Team Lead in AGI Research, DeepMind
Honorary Professor, Gatsby Computational Neuroscience Unit, UCL
Matthew Botvinick is Director of Neuroscience Research at DeepMind and Honorary Professor at the Gatsby Computational Neuroscience Unit at University College London. Dr. Botvinick completed his undergraduate studies at Stanford University in 1989 and medical studies at Cornell University in 1994, before completing a PhD in psychology and cognitive neuroscience at Carnegie Mellon University in 2001. He served as Assistant Professor of Psychiatry and Psychology at the University of Pennsylvania until 2007 and Professor of Psychology and Neuroscience at Princeton University until joining DeepMind in 2016. Dr. Botvinick’s work at DeepMind straddles the boundaries between cognitive psychology, computational and experimental neuroscience and artificial intelligence.
The last few years have seen some dramatic developments in artificial intelligence research. What implications might these have for neuroscience and psychology? Investigations of this question have, to date, focused largely on deep neural networks trained using supervised learning, in tasks such as image classification. In this talk, I'll discuss another area of recent AI work which has so far received less attention from neuroscientists and psychologists, but which may have more profound implications: deep reinforcement learning. Deep RL provides a rich framework for studying the interplay among learning, representation and decision-making, offering to the brain sciences a new set of research tools and a wide range of novel hypotheses. I'll provide a high-level introduction to deep RL and survey some of its key implications for research on the brain and behavior.
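To make the RL framework mentioned above concrete, here is a minimal sketch of the temporal-difference update at the heart of reinforcement learning, using a tabular Q-learning agent on a toy corridor task. The environment and all names are illustrative, not from the talk; in deep RL the lookup table below is replaced by a neural network trained on the same TD error.

```python
import random

random.seed(0)

# Toy environment (illustrative): a corridor of 5 states,
# with reward only when the agent reaches the rightmost state.
N_STATES = 5
LEFT, RIGHT = 0, 1

def step(state, action):
    nxt = min(max(state + (1 if action == RIGHT else -1), 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(qs):
    # Argmax with random tie-breaking.
    best = max(qs)
    return random.choice([a for a, v in enumerate(qs) if v == best])

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # one value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration.
        a = random.randrange(2) if random.random() < epsilon else greedy(Q[s])
        s2, r, done = step(s, a)
        td_target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (td_target - Q[s][a])  # temporal-difference update
        s = s2

# The learned greedy policy for each non-terminal state (1 = move right).
policy = [greedy(Q[s]) for s in range(N_STATES - 1)]
print(policy)
```

The interplay of learning (the TD update), representation (the Q table, or a network in deep RL) and decision-making (epsilon-greedy choice) is exactly the triad the abstract highlights.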
Professor Claudia Clopath is based in the Bioengineering Department at Imperial College London, where she heads the Computational Neuroscience Laboratory.
Her research interests are in the field of neuroscience, especially insofar as it addresses the questions of learning and memory. She uses mathematical and computational tools to model synaptic plasticity, and to study its functional implications in artificial neural networks.
Prof. Clopath holds an MSc in Physics from the EPFL and did her PhD in Computer Science under Wulfram Gerstner. Before joining Imperial College, she did postdoctoral fellowships in neuroscience with Nicolas Brunel at Paris Descartes and in the Center for Theoretical Neuroscience at Columbia University. She has published highly cited articles in top journals such as Science and Nature, has given dozens of invited talks and keynotes around the world, and has received various prizes, such as the Google Faculty Award in 2015.
Gaining a better understanding of the brain is an urgent challenge in our society, due to an aging population, which has led to a higher incidence of neurological diseases such as Alzheimer's and Parkinson's disease. Neuroscience can be studied from different angles: experimentally, by measuring different aspects of the brain, or theoretically, by constructing models that mimic the brain. These two approaches can work hand-in-hand: experimental findings inform theoretical models, and models allow a broader and more concise understanding and predict new phenomena, in turn motivating new experiments. Our lab is on the modeling side, working in tight collaboration with experimental labs. We are especially interested in the field of learning and memory, which is thought to happen when connections between neurons change, a process called synaptic plasticity. This research has two main types of applications: medical applications leading to translational research, and engineering applications helping, for example, to design machines that approach human-like learning capabilities.
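As a minimal illustration of the kind of synaptic-plasticity modeling described above, here is a generic rate-based Hebbian rule with weight decay. This is a textbook sketch, not a model from the Clopath lab: a synaptic weight grows when pre- and postsynaptic activity coincide, and relaxes back toward zero otherwise.

```python
# Generic rate-based Hebbian plasticity sketch (illustrative, not a Clopath-lab model).
eta, decay = 0.01, 0.001  # learning rate and passive decay rate
w = 0.1                   # initial synaptic weight

def hebbian_update(w, pre_rate, post_rate):
    # dw = eta * pre * post - decay * w:
    # correlated pre/post firing strengthens the synapse; decay bounds growth.
    return w + eta * pre_rate * post_rate - decay * w

# Correlated activity strengthens the synapse...
for _ in range(100):
    w = hebbian_update(w, pre_rate=1.0, post_rate=1.0)
strong = w

# ...while a period of silence lets it decay back down.
for _ in range(100):
    w = hebbian_update(w, pre_rate=0.0, post_rate=0.0)
print(strong, w)
```

Real models of the kind studied in the lab add biological detail to this skeleton, for instance voltage or spike-timing dependence, but the core idea of activity-dependent weight change is the same.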
Topics: Theoretical neuroscience, Computational neuroscience, Neural coding, Population coding, Memory
Ila Fiete is a Professor in the Department of Brain and Cognitive Sciences and an Associate Member of the McGovern Institute at MIT. She obtained her undergraduate degrees in Physics and Mathematics at the University of Michigan and her M.A. and Ph.D. in Physics at Harvard, under the guidance of Sebastian Seung at MIT. Her postdoctoral work was at the Kavli Institute for Theoretical Physics at Santa Barbara, and at Caltech, where she was a Broad Fellow. She was subsequently on the faculty of the University of Texas at Austin in the Center for Learning and Memory. Ila Fiete is an HHMI Faculty Scholar. She has been a CIFAR Senior Fellow, a McKnight Scholar, an ONR Young Investigator, an Alfred P. Sloan Foundation Fellow and a Searle Scholar.
Professor Karl J. Friston MB, BS, MA, MRCPsych, FMedSci, FRSB, FRS
Wellcome Principal Fellow
Scientific Director: Wellcome Trust Centre for Neuroimaging
Karl Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). These contributions were motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. Mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. Friston currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions. His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference). Friston received the first Young Investigators Award in Human Brain Mapping (1996) and was elected a Fellow of the Academy of Medical Sciences (1999). In 2000 he was President of the international Organization for Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award, and he was elected a Fellow of the Royal Society in 2006. In 2008 he received a medal from the Collège de France, and in 2011 an Honorary Doctorate from the University of York. He became a Fellow of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for contributions to mathematical biology, and was elected a member of EMBO (excellence in the life sciences) in 2014 and of the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and the Glass Brain Award, a lifetime achievement award in the field of human brain mapping. He holds Honorary Doctorates from the University of Zurich and Radboud University.
This overview of the free energy principle offers an account of embodied exchange with the world that associates conscious operations with actively inferring the causes of our sensations. Its agenda is to link formal (mathematical) descriptions of dynamical systems to a description of perception in terms of beliefs and goals. The argument has two parts: the first calls on the lawful dynamics of any (weakly mixing) ergodic system – from a single cell organism to a human brain. These lawful dynamics suggest that (internal) states can be interpreted as modelling or predicting the (external) causes of sensory fluctuations. In other words, if a system exists, its internal states must encode probabilistic beliefs about external states. Heuristically, this means that if I exist (am) then I must have beliefs (think). The second part of the argument is that the only tenable beliefs I can entertain about myself are that I exist. This may seem rather obvious; however, if we associate existing with ergodicity, then (ergodic) systems that exist by predicting external states can only possess prior beliefs that their environment is predictable. It transpires that this is equivalent to believing that the world – and the way it is sampled – will resolve uncertainty about the causes of sensations. We will conclude by looking at the epistemic behaviour that emerges under these beliefs, using simulations of active inference.
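For orientation, the free energy invoked above can be stated in one standard variational form (notation here is a common convention, not necessarily the talk's): the free energy upper-bounds surprise (negative log evidence), so a system that minimizes it implicitly infers the causes of its sensations.

```latex
% F = variational free energy; q(s) = approximate posterior over hidden states;
% p(o, s) = generative model over observations o and hidden states s.
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\ge\, 0}
        \;-\; \ln p(o)
```

Because the KL divergence is non-negative, F is never less than the surprise $-\ln p(o)$; minimizing F both improves the implicit posterior beliefs $q(s)$ (perception) and, through action on $o$, keeps surprise low (active inference).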
This presentation considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). I will illustrate this behaviour in terms of Bayesian belief updating, and associated neuronal processes, to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place-cell activity. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively. These simulations are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain, and the functional imperatives that may apply to neuronal networks.
Timothy P. Lillicrap is a Canadian neuroscientist and AI researcher, adjunct professor at University College London, and staff research scientist at Google DeepMind, where he has been involved in the AlphaGo and AlphaZero projects mastering the games of Go, Chess and Shogi. His research focuses on machine learning and statistics for optimal control and decision making, as well as using these mathematical frameworks to understand how the brain learns. He has developed algorithms and approaches for exploiting deep neural networks in the context of reinforcement learning, and new recurrent memory architectures for one-shot learning. His numerous contributions to the field have earned him a number of honors, including the Governor General’s Academic Medal, an NSERC Fellowship, the Centre for Neuroscience Studies Award for Excellence, and numerous European Research Council grants. He has also won a number of Social Learning tournaments.
The research in the Moran Lab focuses on computational neuroscience, computational psychiatry and computational neurology. In particular, the Moran Lab aims to join brain connectivity analysis with its algorithmic role, i.e. what information brain connections relay. This work lies at the intersection of artificial intelligence (deep networks), Bayesian inference (variational principles) and experimental neurobiology (cognitive tasks in the scanner). Of particular interest is the role of families of neurotransmitters, such as noradrenaline, dopamine and serotonin, in prediction errors and model-based decision making. The Moran Lab uses the free energy principle as a foundation for developing new methods in artificial intelligence and in disease modeling, focusing on age-related neurodegenerative disease and schizophrenia. Dr. Moran also serves as an editor for the journals NeuroImage and NeuroImage: Clinical.
Topics: Theoretical Neuroscience, Machine Learning
Professor of Theoretical Neuroscience and Machine Learning,
Director, Gatsby Computational Neuroscience Unit
Maneesh Sahani is Professor of Theoretical Neuroscience and Machine Learning at the Gatsby Computational Neuroscience Unit at University College London (UCL). Graduating with a B.S. in physics from Caltech, he stayed to earn his Ph.D. in the Computation and Neural Systems program, supervised by Richard Andersen and John Hopfield. After periods of postdoctoral work at the Gatsby Unit and the University of California, San Francisco, he returned to the faculty at Gatsby in 2004 and was elected to a personal chair at UCL in 2013. His work spans the interface of the fields of machine learning and neuroscience, with particular emphasis on the types of computation achieved within the sensory and motor cortical systems. He has helped to pioneer analytic methods which seek to characterize and visualize the dynamical computational processes that underlie the measured joint activity of populations of neurons. He has also worked on the link between the statistics of the environment and neural computation, machine-learning based signal processing, and neural implementations of Bayesian and approximate inference.
Topics: neural networks, cognitive neuroscience, complex systems, meta-learning, deep reinforcement learning
Jane Wang, DeepMind, UK
She is a Senior Research Scientist at DeepMind. Her background is in computational and cognitive neuroscience, complex systems, and physics. She is interested in applying neuroscience principles to inspire new algorithms for artificial intelligence and machine learning.
Topics: Computational neuroscience, theoretical neuroscience, machine intelligence, frameworks and representations for understanding the world
James took a roundabout route to becoming a neuroscientist: after an undergraduate degree in physics, he changed tracks to become a medical doctor, but, yearning for deeper understanding, he began a PhD in neuroscience at Oxford in 2015. He is now a Sir Henry Wellcome postdoctoral fellow at both Oxford and Stanford University, where he works with Tim Behrens and Surya Ganguli.
Learning and interpreting the structure of the environment is an innate feature of biological systems, and is integral to guiding flexible behaviours for evolutionary viability. The concept of a cognitive map has emerged as one of the leading metaphors for these capacities, and unravelling the learning and neural representation of such a map has become a central focus of neuroscience. While experimentalists are providing a detailed picture of the neural substrate of cognitive maps in hippocampus and beyond, theorists have been busy building models to bridge the divide between neurons, computation, and behaviour. These models can account for a variety of known representations and neural phenomena, but often provide a differing understanding of not only the underlying principles of cognitive maps, but also the respective roles of hippocampus and cortex. In this talk, we bring many of these models into a common language, distil their underlying principles of constructing cognitive maps, provide novel (re)interpretations for neural phenomena, suggest how the principles can be extended to account for prefrontal cortex representations and, finally, speculate on the role of cognitive maps in higher cognitive capacities.