Advanced Computational Neuroscience

“Advanced Computational Neuroscience” is a Higher School of Economics course for graduate and PhD students, offered within the framework of the iBrain project.

This course provides an introduction to basic computational methods for understanding what nervous systems do and for determining how they function. We will explore the computational principles governing various aspects of behavior, vision, sensory-motor control, learning, and memory. Specific topics that will be covered include reinforcement learning models, representation of information by spiking neurons, processing of information in neural networks, and models of neuronal dynamics and biophysics. 

We will make use of exercises to gain a deeper understanding of the concepts and methods introduced in the course. Mathematical techniques will be introduced as needed. The course is primarily aimed at graduate (master's and PhD) students interested in learning how the brain processes information and how to use mathematics to model brain processes.

The course "Computational Neuroscience" is new and unique discipline within the educational programs of the National Research University Higher School of Economics. The course is based on contemporary scientific research in computational neuroscience and related scientific areas. It is essential in training competent specialist in the areas of cognitive sciences and technologies.

The author of the course, Boris Gutkin, has significant teaching experience, including the design and coordination of a similar course in the Cogmaster program at the École Normale Supérieure (Paris, France), lecturing on computational neuroscience at University College London and the Woods Hole Marine Biological Laboratory, and directing and leading several advanced summer schools on the topic.

The following module topics will be covered over several lectures each (a brief illustrative code sketch for one topic from each module follows the list):

  1. Modeling of cognition and behavior (models of decision making; classical conditioning; operant conditioning; learning by reinforcement; neuroeconomics)
  2. Information Processing (sensory processing; linear filters and receptive fields; estimation of receptive fields; edge detectors; the Hubel and Wiesel model of visual processing; natural image statistics; information theory; independent component analysis; neural decoding; encoding by populations)
  3. Dynamics and mechanisms (biophysics of the neuron; the Hodgkin-Huxley formalism; generation of action potentials; feedforward and recurrent neural networks; attractor networks; energy functions and Lyapunov energy; learning and synaptic plasticity; associative memories)
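
To give a flavour of the first module, here is a minimal sketch of the Rescorla-Wagner model of classical conditioning; the learning rate, the toy acquisition/extinction schedule, and the function name are illustrative choices, not course material:

import numpy as np

def rescorla_wagner(stimuli, rewards, alpha=0.1):
    # Rescorla-Wagner rule: associative weights move toward the received reward
    # by a fraction alpha of the prediction error on every trial.
    n_trials, n_cues = stimuli.shape
    w = np.zeros(n_cues)
    history = np.zeros((n_trials, n_cues))
    for t in range(n_trials):
        prediction = stimuli[t] @ w          # reward predicted by the cues present
        delta = rewards[t] - prediction      # prediction error
        w = w + alpha * delta * stimuli[t]   # update only the cues that were present
        history[t] = w
    return history

# Toy experiment: 50 rewarded trials (acquisition) followed by 50 unrewarded trials (extinction).
stimuli = np.ones((100, 1))
rewards = np.concatenate([np.ones(50), np.zeros(50)])
weights = rescorla_wagner(stimuli, rewards)
print(weights[49, 0], weights[-1, 0])  # near 1 after acquisition, decaying toward 0 in extinction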
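
For the second module, the following sketch treats a receptive field as a linear filter and recovers it by spike-triggered averaging from white-noise stimuli; the difference-of-Gaussians filter and the rectified-linear Poisson spiking model are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 1D "receptive field": a centre-surround (difference-of-Gaussians) filter.
x = np.linspace(-3, 3, 61)
true_rf = np.exp(-x**2) - 0.5 * np.exp(-(x / 2)**2)

# White-noise stimuli; the firing rate is a rectified linear function of the filtered stimulus.
n_trials = 20000
stimuli = rng.standard_normal((n_trials, true_rf.size))
rates = np.maximum(stimuli @ true_rf, 0.0)
spikes = rng.poisson(rates)

# For white-noise input, the spike-triggered average recovers the filter up to scale.
sta = (spikes @ stimuli) / spikes.sum()
correlation = np.corrcoef(sta, true_rf)[0, 1]
print(f"correlation between STA and true filter: {correlation:.2f}")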
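
For the third module, this sketch builds a small Hopfield-style attractor network: memories are stored with a Hebbian rule, a corrupted pattern is cleaned up by asynchronous updates, and the Lyapunov energy of the final state is reported; the network size, number of stored patterns, and amount of corruption are illustrative choices:

import numpy as np

rng = np.random.default_rng(1)

def energy(W, s):
    # Lyapunov energy of a Hopfield network; it never increases under asynchronous updates.
    return -0.5 * s @ W @ s

n = 100
patterns = rng.choice([-1, 1], size=(3, n))   # three random +/-1 memories
W = (patterns.T @ patterns) / n               # Hebbian learning rule
np.fill_diagonal(W, 0)                        # no self-connections

# Start from a corrupted version of the first memory and update units asynchronously.
state = patterns[0].copy()
flip = rng.choice(n, size=20, replace=False)
state[flip] *= -1

for _ in range(5):                            # a few sweeps over all units
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / n
print(f"overlap with stored pattern: {overlap:.2f}, energy: {energy(W, state):.2f}")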

 

Reading and Material 

Required:

Textbook: Dayan & Abbott. Theoretical Neuroscience. MIT Press 2001.

Additional:

1. Kandel E.R., Schwartz J.H., Jessell T.M., Siegelbaum S.A., Hudspeth A. J. (Eds.) Principles of Neural Science, 5th Edition. McGraw-Hill Professional, 2012

2. Sutton & Barto. Reinforcement Learning: An Introduction. Chapters 1-6. A highly readable introduction to reinforcement learning.
3. Schneiderman et al. (1962). Science 136:650--652. Article on rabbit eye-blink conditioning.
4. Real (1991). Science 253:980--986. Article on bumblebee behavior.
5. Schultz et al. (1997). Science 275:1593--1599. Dopamine, conditioning, and reinforcement learning brought together.
6. Daw et al. (2006). Nature 441:876--879. Reinforcement learning applied to fMRI.
7. Doya (2008). Nature Neuroscience 11:410--416. Recent review of the literature.

8. David Marr (1980). Vision. MIT Press. Introduction. Must read.
9. Parker & Newsome (1998). Annual Review of Neuroscience. A nice review of detection and discrimination experiments.
10. Hubel (1988). Eye, Brain, and Vision. Scientific American. Chapters 2-4.

Further reading suggestions:

Averbeck, Latham and Pouget (2006). Neural correlations, population coding and computation. Nature Reviews Neuroscience.

Pouget, Zemel and Dayan (2000). Information processing with population codes. Nature Reviews Neuroscience.

 

Language: English
