Familiarity with least squares linear regression
Ability to program in R (preferred) or in Python
Supervised statistical learning involves building a model for predicting an output (response, dependent) variable from one or more input (predictor) variables. Such predictive questions arise in many areas, for example Netflix recommendations, self-driving cars, predicting disease status or vulnerability, and finding early markers of disease.
In unsupervised statistical learning, there are only input variables but no supervising output (dependent) variable; nevertheless, we can learn relationships and structures from such data using cluster analysis and methods for dimension reduction.
This course provides a firm theoretical basis for understanding and evaluating statistical learning techniques and teaches the skills to apply and evaluate them.
Statistical learning is closely related to the computer-science field known as "machine learning", since many of its methods originated in computer science (pattern recognition, artificial intelligence).
The supervised learning methods discussed will include various classical and state-of-the-art regression and classification methods: linear regression, regularized regression (Ridge, Lasso, and other L1-penalized methods), naive Bayes, decision trees, logistic regression, splines, random forests, support vector machines, and deep learning. We explain the interrelations between these methods and analyze their behavior.
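As a small taste of the regularization idea mentioned above (this sketch is illustrative and not part of the official course material), the closed-form ridge estimate for a single centered predictor shrinks the least squares slope toward zero as the penalty grows. The data and penalty values below are made up for illustration.

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge estimate for one centered predictor:
    beta = sum(x*y) / (sum(x^2) + lam). lam = 0 gives least squares."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]   # centered predictor (toy data)
y = [-4.1, -1.9, 0.2, 2.1, 3.9]   # roughly y = 2x plus noise
ols = ridge_1d(x, y, 0.0)         # unpenalized least squares slope
shrunk = ridge_1d(x, y, 10.0)     # larger penalty shrinks the slope
print(ols, shrunk)
```

Note how the penalized slope is strictly smaller in magnitude than the least squares slope; the course covers why this bias can reduce prediction error.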
We will also discuss model selection, covering both classical and state-of-the-art methods, including various forms of cross-validation and permutation tests.
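The mechanics behind k-fold cross-validation can be sketched in a few lines (again, an illustrative sketch rather than course material): the data indices are partitioned into k disjoint folds, and each fold serves once as the test set while the rest form the training set.

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation.
    Indices are assigned to folds round-robin for simplicity; in
    practice one would shuffle first."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_indices(6, 3))   # 3 folds over 6 observations
for train, test in splits:
    print("train:", train, "test:", test)
```

Each observation appears in exactly one test fold, so every data point is used for validation exactly once.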
With regard to unsupervised learning, we consider clustering methods (e.g., classic k-means as well as more advanced methods) and dimension reduction methods (such as PCA, ICA, and others).
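The classic k-means algorithm mentioned above alternates two steps: assign each point to its nearest center, then move each center to the mean of its assigned points. A minimal one-dimensional sketch (toy data and starting centers chosen for illustration):

```python
def kmeans_1d(xs, centers, iters=20):
    """Lloyd's algorithm for 1-D data: assign points to the nearest
    center, then update each center to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)),
                          key=lambda j: (x - centers[j]) ** 2)
            clusters[nearest].append(x)
        # Keep a center unchanged if its cluster became empty.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

xs = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]      # two obvious groups
centers = kmeans_1d(xs, [0.0, 6.0])      # arbitrary starting centers
print(centers)
```

On this toy data the centers converge to the two group means; the course discusses when and why k-means can instead get stuck in poor local optima.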
An introduction to Statistical Learning.
See the Leiden University students' website for the Statistical Science programme -> Schedules
Mode of Instruction
Lectures and computer practicals. We will use Brightspace to share all course material.
A written exam (50%).
Two homework assignments (each 25%). The final homework grade is the unrounded average of the two assignments' grades.
A passing score is required for both the assignments and the exam: at least a 5.5 average for the assignments and at least a 5.5 for the exam.
Date information about the exam and resit can be found in the course schedule.
- Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning (2nd ed.). Springer. (available for free at https://web.stanford.edu/~hastie/Papers/ESLII.pdf)
- Bishop, C. M. (2006). Pattern recognition and machine learning (1st ed.). Springer.
- Murphy, K. P. (2012). Machine learning: A probabilistic perspective. MIT Press.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. (available for free at http://www.deeplearningbook.org/)
- Schölkopf, B., & Smola, A. J. (2002). Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press.
- Vapnik, V. N. (1998). Statistical learning theory. Wiley.
Make sure to enroll in Brightspace for grades, course materials, and course updates.
To obtain a grade and the ECTS for the course, sign up for the (re-)exam in uSis ten calendar days before the (re-)exam takes place. Note that students are expected to participate actively in all activities of the programme and are therefore expected to register for and use the first exam opportunity.
Exchange and Study Abroad students, please see the Prospective students website for information on how to apply.
Julian Karch: firstname.lastname@example.org
- This is an elective course in the Master's programme Statistical Science for the Life and Behavioural Sciences / Data Science.