Learning Outcomes
On completion of the course, students will be able to:
1. select and justify statistical models and methods for the data analysis of real-world problems based on reasoned argument, especially when the underlying data-generating mechanism is unknown
2. apply a range of supervised and unsupervised statistical learning algorithms to real-world problems
3. evaluate and optimise the performance of learning models and algorithms using theoretical explanation and empirical evidence, and communicate their expected accuracy and uncertainty
4. combine multiple models (e.g., through ensemble methods) to achieve higher predictive accuracy
5. conduct both theoretical and empirical comparative analyses to decide which neural network is the most suitable for a particular task
6. apply, evaluate, and optimise neural networks across a wide range of problems
7. design and implement deep learning models for real-world applications.
Course Content
The course has two parts: statistical learning and machine learning. The first part focuses on applied aspects of statistical learning while also covering the underlying algorithms. Supervised methods are addressed with particular emphasis on classification, including logistic regression, classification trees, linear and quadratic discriminant analysis, k-nearest neighbours, and support vector machines. Regression methods, such as linear regression, splines, generalised additive models, and regression trees, are also included. In addition, unsupervised methods, such as principal component analysis, k-means clustering, and hierarchical clustering, are covered. Model validation is addressed through cross-validation and bootstrap methods. The course also discusses regularisation in model selection, the analysis of high-dimensional data, and the improvement of predictive performance through methods such as model averaging, bagging, and boosting.
The second part introduces machine learning with a focus on neural networks and deep learning. The perceptron is introduced as a basic element for linear separability, and its limitations in classification problems are discussed. After this, various activation functions and the use of the sigmoid perceptron to handle non-linear problems are covered. The course also includes different learning paradigms, such as supervised, unsupervised, and reinforcement learning. Feedforward neural networks and the backpropagation algorithm are covered, as well as recurrent neural networks (RNNs). The course concludes with an overview of deep learning, in which fundamental principles and various types of architectures are discussed in relation to their applicability to practical problems.
Assessment
• Assignment for submission
• Written examination
• Project work
• Written reflection
• Seminar
Grades
The Swedish grading scale U–G (Fail–Pass).
Prerequisites
• To be admitted, applicants must meet the general entry requirements for doctoral studies. Applicants who have not been admitted to a doctoral programme at Dalarna University may be admitted to the course subject to space availability.
Other Information
Replaces FMI2224