Elementary calculus and linear algebra; basics of probability theory and statistics.
The course provides an introduction to key concepts and algorithms for neural networks. It covers the following topics:
basics of statistical pattern recognition,
methods for estimation of probability distributions,
linear models, including Support Vector Machines,
single-layer and multi-layer networks,
RBF-networks and kernel methods,
algorithms for parameter optimization: line search, gradient descent, conjugate gradients, Newton, quasi-Newton, and Levenberg-Marquardt methods,
data pre-processing and feature extraction,
various error functions, bias and variance, regularization.
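To give a flavour of the parameter-optimization topic above, here is a minimal gradient-descent sketch. It is a toy illustration only, not course material: the objective function, learning rate, and iteration count are arbitrary choices (the course itself uses Matlab/Octave).

```python
# Toy example: gradient descent on f(w) = (w - 3)^2,
# whose unique minimum is at w = 3.

def grad(w):
    # Derivative of (w - 3)^2 with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0    # initial guess
lr = 0.1   # step size (learning rate), chosen arbitrarily
for _ in range(100):
    w -= lr * grad(w)  # step against the gradient

print(round(w, 4))  # converges toward the minimum at 3.0
```

Each update moves `w` a small step in the direction of steepest descent; the other algorithms listed (conjugate gradients, Newton-type and Levenberg-Marquardt methods) refine how that step direction and length are chosen.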
Moreover, several real-life applications of neural networks, such as text-to-speech generation, handwritten character recognition, fraud detection, and autonomous driving, will be discussed.
The course will consist of weekly lectures, a few programming assignments, and a final written exam.
The objectives of this course are:
to provide an introduction to neural networks in the context of statistical pattern recognition,
to develop practical skills in applying neural networks to real-life problems, such as image classification, speech recognition, and weather forecasting,
to provide an overview of classical, general-purpose optimization algorithms,
to refresh the basics of statistics and calculus, and programming skills in Matlab/Octave.
The most recent timetable can be found at the LIACS website.
Mode of instruction
The final grade will be the weighted average of grades for:
programming assignments (60%)
written exam (40%)
See the course's Blackboard page.
C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1996.
You have to sign up for classes and examinations (including resits) in uSis. Check this link for more information and activity codes.
There is limited capacity for students from outside the Computer Science master's programme. Please contact the study advisor.
Study coordinator Computer Science, Riet Derogee