Machine Learning - A.Y. 2018/2019

Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions. Machine learning algorithms can be applied to virtually any scientific or non-scientific field (health, security and cyber-security, management, finance, automation, robotics, marketing...).

Instructor      Telephone     Office hours   Office
Paola Velardi   06-49918356   send e-mail    Via Salaria 113, 3rd floor, room 3412

Course schedule

FIRST semester:

When                   Where
Monday 14:00-16:30     Aula 1, Castro Laurenziano
Thursday 14:00-16:30   Aula 1, Castro Laurenziano

Important Notes

The course is taught in English. Attending classes is HIGHLY recommended (homework assignments, mid-term, laboratory sessions).

Homework assignments and self-assessment tests are distributed via the Google group; you MUST register.

Summary of Course Topics

The course introduces motivations, paradigms and applications of machine learning. This is to be considered an introductory course. An advanced course is offered during the second semester: Deep Learning and Applied Artificial Intelligence.

Topics. Supervised learning: decision trees, instance-based learning, naïve Bayes, support vector machines, neural networks, introduction to deep learning, ensemble methods. Unsupervised learning: clustering, association rules. Semi-supervised learning. Reinforcement learning. Genetic algorithms and genetic programming. Building machine learning systems: feature engineering, model selection, hyperparameter tuning, error analysis.
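To give a flavor of the supervised-learning topics above, here is a minimal illustrative sketch (not course material; the dataset choice is our own example) of training a decision tree classifier with scikit-learn:

```python
# Minimal supervised-learning example: a decision tree on the classic Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load features X and labels y, then hold out a test set for evaluation
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# max_depth limits tree growth, a simple guard against overfitting
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)

# Accuracy on the held-out test set
accuracy = clf.score(X_test, y_test)
```

The same fit/score pattern applies to the other classifiers covered in the course (naïve Bayes, SVMs, neural networks).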


In-class labs (bring your computer on lab days!) are dedicated to learning to design practical machine learning systems: feature engineering, model selection, error analysis. We will mostly use the scikit-learn library and TensorFlow.
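As an illustrative sketch of the lab workflow (feature engineering plus model selection; the dataset and parameter grid are our own examples, not lab assignments), scikit-learn lets you chain these steps in a single pipeline:

```python
# Feature scaling + hyperparameter tuning combined in one scikit-learn Pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),  # feature engineering: standardize features
    ("svm", SVC()),               # the classifier whose hyperparameters we tune
])

# Model selection: 5-fold cross-validated search over the SVM's C parameter
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)

# Evaluate the best model found on the held-out test set
test_accuracy = grid.score(X_test, y_test)
```

Putting scaling inside the pipeline ensures the scaler is fit only on each training fold during cross-validation, avoiding data leakage into the validation folds.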

After a couple of introductory labs, labs will be organized in challenges.

Lab material (slides, datasets for challenges) will be provided before lab days via the Google group. Lab assistant is Dr. Stefano Faralli.

Pre-requisites (IMPORTANT!)

Students with an insufficient programming background should take at least a Python course.

Basic skills in math, algorithms and probability are also necessary.

We will NOT spend time during the course to cover background competences that master students in Computer Science are expected to have already.


There are plenty of online books and resources on Machine Learning. We list here some of the most widely used textbooks:

Additional useful texts:


A dataset search engine:

Exam rules (read carefully)

  • Written exam on course material (50% of final grade)
  • Scikit-learn/TensorFlow project (50%). Other tools are allowed; however, labs will use scikit-learn and TensorFlow.
  • Depending on the number of students attending the course, each year we decide whether it is feasible to split the exam in two: mid-term and second-term. The mid-term is held around the second week of November.
  • Self-assessment questions are distributed after each lesson to members of the Google group. The written exam will include closed and open questions similar to those in the self-assessments.
  • IMPORTANT: the exam questionnaire will include a set of (relatively simple) closed questions and 2-4 (depending on complexity) open questions, both on practical and theoretical issues. Closed questions act as a FILTER: students who do not answer at least 70% of the closed questions correctly will be rejected.
  • IMPORTANT: To assess the number of participants in each written exam, a Google form will be sent via the Google group about two weeks BEFORE the exam date. Please check your @studenti mail on a regular basis. Please note that registering for a test date via the Google form does not exempt you from registering on INFOSTUD. I cannot register your final grade in a given exam session IF YOU DID NOT REGISTER on INFOSTUD for that session. Furthermore, to register a grade I need both the result of the written test AND the project (and both must be >=18). However, you do not need to deliver both simultaneously. You can, e.g., pass the test in January and deliver the project in June; I will then register the grade in June.
  • IMPORTANT: during the test you cannot use ANY material. Bring pen, paper, and a calculator (a cell phone is OK, but it must remain visible on the desk).

Project 2016 (Fall): Predicting forest cover type

The project is described here

The data set can be downloaded from DRIVE


The spring 2016 project was a competition among student teams (max 3 students per team). The task was to predict the winner of a Role Playing Game (RPG) with direct clashes. Students were given a large dataset with detailed information on thousands of games, including the IDs of the two competitors, the date of the match, and the winner's ID. Students delivered the predictor by the end of June (according to precise project specifications). Instructors then fed the systems the details of additional games (not in the learning set) and computed the precision of each system at predicting the winner's ID.

The project description is found here. The learning dataset can be downloaded here.

Project 2017-18 and 18-19


How a project is evaluated:

  • Simple problem, easy-to-model and easy-to-describe instances, small dataset, standard ML algorithms: 20-24
  • Simple problem, feature engineering needed, medium-large dataset, use of algorithms on available platforms, use of scikit-learn or a more efficient implementation of an existing algorithm (e.g. some ad-hoc software developed), performance evaluation: up to 25-28
  • Original problem, complex dataset with non-trivial feature engineering, thorough data analysis and feature/hyperparameter fitting, non-straightforward use of algorithms or a new algorithm or ad-hoc implementation, performance evaluation and insight into results: up to 30L

Three very good projects: Deep-Reinforcement-Learning-Proyect-Documentation-Alfonso-Oriola.pdf, A Framework for Genetic Algorithms, RainForestML2016Pantea.pdf

NOTE: Please read carefully how a project is evaluated, and read the project examples above (they have all been rated 30L). Once a project is delivered and evaluated, students cannot complain that the grade is too low. We are providing clear indications of what is expected to get the maximum grade. We also expect original work: plagiarism will be punished.

Google Group


Please subscribe to the group Machine Learning 2018-19 on Google Groups.

Slides and course materials (download only those with date=2018)

Timetable Topic PPT PDF Suggested readings
2018 Introduction to ML. Course syllabus and course organization.   ML2018Introduction.pdf  
2018 Building ML systems 2.BuildingMachineLearningSystems.pptx 2.BuildingMachineLearningSystems.pdf (Chapter 1)
2018 Classifiers: Decision Trees 3.dtrees.ppt 3.dtrees.pdf

Decision Trees:

Random Forests:

2018 Practical ML: feature engineering 4.Feature_Engineering.pptx 4.Feature_Engineering.pdf

See also the "Google AutoML" project for hyperparameter tuning with structured data:

2018 Performance Evaluation: error estimates, confidence intervals, one/two-tail test 4.evaluation.ppt 4.evaluation-compressed.pdf chapter5-ml-EVALUATION.pdf
2018 Neural Networks 5.neural.pptx 5.neural.pdf

2018 Deep Learning (Convolutional NN and denoising autoencoders) 5b.Deeplearning.pptx 5b.Deeplearning.pdf

see pointers in slides

and also

2018 Ensemble methods (bagging, boosting) 6.ensembles.pptx 6.ensembles-compressed.pdf

2018 Support Vector Machines 7.svm.pptx 7.svm.pdf SVM.pdf
2018 Probabilistic learning: Maximum Likelihood Learning, Naive Bayes 8.naivebayes.pptx 8.naivebayes.pdf


Note: community detection (a form of clustering) is presented in Web and Social Information Extraction during the 2nd semester.

  Unsupervised learning: Association Rules      
2018 Unsupervised Learning: Reinforcement Learning and Q-Learning 10.reinforcementQ.pptx 10.reinforcementQ.pdf

  Unsupervised Learning: Genetic Algorithms

Syllabus (2018-19)

  • What is machine learning. Types of learning. Workflow of ML systems.
  • Classifiers. Decision Tree Learning. Random Forest
  • Feature engineering
  • Evaluation: performance measures, confidence intervals and hypothesis testing
  • Ensemble methods
  • Artificial Neural Networks
  • Deep learning (Convolutional networks, Denoising Autoencoders)
  • Support Vector Machines
  • Maximum Likelihood Learning and Naive Bayes
  • Unsupervised Rule learning: Apriori algorithm and frequent itemset mining
  • Reinforcement learning and Q-Learning, Deep Q
  • Tools: Weka, scikit-learn, TensorFlow
Topic revision: r262 - 2019-07-15 - PaolaVelardi