The first half of this course is an introduction to probability theory. We start with the definition of a probability distribution and discuss conditional probabilities and (conditional) independence. Next, we introduce a number of well-known and widely used probability distributions and consider joint distributions of multiple, dependent variables. We define the expectation, variance, and covariance, and finally we discuss two important theorems: the Law of Large Numbers and the Central Limit Theorem.
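As a small taste of these two theorems, the simulation below (our illustration, not part of the course materials) averages many independent Uniform(0, 1) draws: the sample mean settles near the true mean 0.5 (Law of Large Numbers), and the spread of repeated sample means shrinks like 1/√n (as the Central Limit Theorem predicts).

```python
import random
import statistics

random.seed(0)

def sample_mean(n):
    """Mean of n independent Uniform(0, 1) draws (true mean 0.5)."""
    return sum(random.random() for _ in range(n)) / n

# Repeat the experiment 200 times with n = 10,000 draws each.
means = [sample_mean(10_000) for _ in range(200)]

# Law of Large Numbers: each sample mean is close to 0.5.
print(statistics.mean(means))

# Central Limit Theorem: the sample means fluctuate around 0.5 with
# standard deviation about sqrt(1/12) / sqrt(10_000) ≈ 0.003.
print(statistics.stdev(means))
```

Increasing `n` makes the fluctuations of the sample mean visibly smaller, which is the empirical face of both theorems.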

The second half of the course is an introduction to statistics. On the basis of data, we try to draw conclusions about the (random) process that generated those data. We discuss estimation theory, in which we attempt to find the probability distribution that “best” fits the data; in particular, we consider the method of maximum likelihood. Next, we look at hypothesis testing, where we try to determine whether or not the data are consistent with a given hypothesis.
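To preview the maximum-likelihood idea, here is a minimal sketch (our own example, with made-up parameters): for Bernoulli data with k successes in n trials, the log-likelihood l(p) = k·log p + (n − k)·log(1 − p) is maximized at the sample proportion p̂ = k/n, which we verify numerically.

```python
import math
import random

random.seed(1)

# Simulate Bernoulli(p) data with a hypothetical true p = 0.3.
true_p = 0.3
data = [1 if random.random() < true_p else 0 for _ in range(10_000)]

k, n = sum(data), len(data)
p_hat = k / n  # closed-form maximum-likelihood estimate

def log_lik(p):
    """Bernoulli log-likelihood of the observed data at parameter p."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# The log-likelihood at p_hat is at least as large as at nearby values.
assert log_lik(p_hat) >= log_lik(p_hat + 0.01)
assert log_lik(p_hat) >= log_lik(p_hat - 0.01)

print(p_hat)  # close to the true p = 0.3
```

The same recipe — write down the likelihood of the data as a function of the unknown parameter, then maximize it — underlies the estimation methods covered in the course.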

Both probability theory and statistics require a fair amount of calculus, such as differentiation and integration of functions of one or more variables. These methods are an integral part of the course, so little mathematical background is required as a prerequisite.

**Literature**

John A. Rice, *Mathematical Statistics and Data Analysis*, 3rd ed., Duxbury Press, 2007.

**Remark**

This course is a combination of lectures, problem sessions, and computer practicals.