Last update: doc. Mgr. Barbora Vidová Hladká, Ph.D. (15.05.2020)
The aim of the course is to present the machine learning process from both theoretical and practical points of view. Students become familiar with the theoretical foundations of selected algorithms and learn to solve machine learning problems in practice using the libraries of the statistical system R. Students must be able to solve an example machine learning problem end to end, and to analyze and describe solution variants and their evaluation.
Last update: doc. Mgr. Barbora Vidová Hladká, Ph.D. (29.04.2021)
During the term, students must 1) present an easy homework assignment, 2) submit two homework assignments whose total score exceeds the required score limit, and 3) pass two written tests whose total score exceeds the required score limit.
Obtaining the course credit is a prerequisite for taking the exam.
More details about homework assignments and tests are available on the course website.
Last update: doc. Mgr. Barbora Vidová Hladká, Ph.D. (15.05.2020)
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani: An Introduction to Statistical Learning. Springer, 2013.
Lantz, Brett: Machine Learning with R. Packt Publishing, 2013.
Last update: doc. Mgr. Barbora Vidová Hladká, Ph.D. (15.05.2020)
The exam is oral. However, the results of written tests and homework assignments are taken into account. Obtaining the course credit is a prerequisite for taking the exam.
The examination requirements correspond to the course syllabus. More details are available on the course website.
Last update: doc. Mgr. Barbora Vidová Hladká, Ph.D. (15.05.2020)
Machine learning - basic concepts, examples of practical applications, theoretical foundations. Supervised and unsupervised learning. Classification and regression tasks. Classification into two or more classes. Training and test examples. Feature vectors. Target variable and prediction function. Machine learning development process. Curse of dimensionality. Clustering.
Decision tree learning. Learning algorithm, splitting criteria and pruning. Random forests.
Linear and logistic regression. Least squares methods. Discriminative classifiers.
Instance-based learning. The k-NN algorithm.
Naive Bayes classifier. Bayesian belief networks.
Support Vector Machines. Large-margin and soft-margin classifiers. Kernel functions.
Ensemble methods. Unstable learning algorithms. Bagging and boosting. AdaBoost algorithm.
Parameters in machine learning. Hyperparameter tuning. Searching the parameter space. Gradient descent algorithm. Maximum likelihood estimation.
Experiment evaluation. Working with development and test data. Sample error, generalization error. Cross-validation, leave-one-out method. Bootstrap method. Performance measures. Evaluation of binary classifiers. ROC curve.
Statistical tests. Statistical hypotheses, one-sample and two-sample t-tests, chi-square tests. Significance level, p-value. Using statistical tests for classifier evaluation. Confidence intervals.
Overfitting: how to recognize and avoid it. Regularization. Bias-variance decomposition.
General principles of feature selection. Feature selection using information gain, greedy algorithms. Dimensionality reduction, Principal Component Analysis.
Foundations of Neural Networks. Single Perceptron, Single Layer Perceptron. The architecture of multi-layer feed-forward models and the idea of back-propagation training. Remarks on deep learning.