
Machine Learning - A.Y. 2018/2019

Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions. Machine learning algorithms can be applied to virtually any scientific or non-scientific field (health, security and cyber-security, management, finance, automation, robotics, marketing, ...).

Instructor    Telephone    Office hours    Office
Paola Velardi    06-49918356    send e-mail    Via Salaria 113, 3rd floor, room 3412

Course schedule

FIRST semester:

When   Where
Monday, 14:00-16:30   Room 1, Castro Laurenziano
Thursday, 14:00-16:30   Room 1, Castro Laurenziano

Important Notes

The course is taught in English. Attending classes is HIGHLY recommended (homework assignments, mid-term exam, laboratory sessions).

Homework assignments and self-assessment tests are distributed via the Google group; you MUST register.

Summary of Course Topics

The course introduces motivations, paradigms and applications of machine learning. It is to be considered an introductory course.

Topics. Supervised learning: decision trees, instance-based learning, naïve Bayes, support vector machines, neural networks, deep learning, ensemble methods. Unsupervised learning: clustering, association rules. Semi-supervised learning. Reinforcement learning. Genetic algorithms and genetic programming. Issues in machine learning: feature engineering, model selection, error analysis.

Laboratory

In-class labs (bring your computer on lab days!) are dedicated to learning the design of practical machine learning systems: feature engineering, model selection, error analysis. We will mostly use the scikit-learn library and TensorFlow.

After a couple of introductory labs, labs will be organized in challenges.
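To give a taste of the workflow practiced in the labs, here is a minimal sketch using scikit-learn. The dataset (Iris) and the model (a shallow decision tree) are illustrative choices only, not the actual lab material:

```python
# Minimal scikit-learn workflow: load data, split, train, evaluate.
# The Iris dataset and DecisionTreeClassifier are illustrative choices,
# not the actual lab assignments.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

The same load/split/fit/evaluate loop carries over to the challenge datasets, with feature engineering and model selection added in between.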

Students with insufficient programming background should take a course on Python.

Lab material (slides, datasets for challenges) will be provided before lab days via the Google group. Lab assistant is Dr. Stefano Faralli.


Textbooks

There are plenty of on-line books and resources on Machine Learning. We list here some of the most widely used textbooks:

Additional useful texts:

Resources:

A dataset search engine: https://toolbox.google.com/datasetsearch

Exam rules (read carefully)

  • Written exam on course material (60% of final grade)
  • Scikit-learn/TensorFlow project (40%). Other tools may be used; however, the labs will cover only scikit-learn and TensorFlow.
  • Examples of mid-term exams: pdf, pdf
  • Self-assessment questions are distributed after each lesson to members of the Google group. The written exam will include closed questions and open questions similar to those in Self-assessments.
  • IMPORTANT: the exam questionnaire will include a set of (relatively simple) closed questions and 2-4 open questions (depending on complexity), both on practical and theoretical issues. Closed questions act as a FILTER: students who do not answer at least 70% of the closed questions correctly will be rejected.
  • IMPORTANT: To assess the number of participants in each written exam, a Google form will be sent via the Google group about two weeks BEFORE the exam date. Please check your @studenti mail regularly. Please note that registering for a test date via the Google form does not exempt you from registering on INFOSTUD. I cannot register your final grade in a given exam session IF YOU DID NOT REGISTER on INFOSTUD for that session. Furthermore, to register a grade I need both the result of the written test AND the project (and both must be >= 18). However, you do not need to deliver both simultaneously: you can, e.g., pass the test in January and deliver the project in June; I will then register the grade in June.
  • IMPORTANT: during the test you cannot use ANY material. Bring pen, paper, and a calculator (a cellular phone is OK, but it must remain visible on the desk).

Project 2016 (Fall): Predicting forest cover type

The project is described here

The data set can be downloaded from DRIVE https://drive.google.com/file/d/0By8kOQZC1_qZTXJLdGx0ZkdiX2c/view?usp=sharing

Project 2016 (Spring): GAME WINNER PREDICTION CONTEST

The spring 2016 project was a competition among student teams (max 3 students per team). The task was to predict the winner of a Role Playing Game (RPG) with direct clash. Students were given a large dataset with detailed information on thousands of games, including the IDs of the two competitors, the date of the match, and the winner ID. Students had to deliver the predictor by the end of June (according to precise project specifications). Instructors then fed the systems the details of additional games (not in the learning set) and computed the precision of each system at predicting the winner ID.
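The evaluation step described above, comparing each system's predicted winner IDs against the true winners on held-out games, amounts to computing the fraction of correct predictions. A minimal sketch, with hypothetical game data:

```python
# Sketch of the evaluation described above: given predicted winner IDs
# and the true winner IDs for held-out games, compute the fraction of
# correct predictions. The IDs below are hypothetical examples.
def winner_prediction_accuracy(predicted, actual):
    """Fraction of games where the predicted winner ID matches the true one."""
    assert len(predicted) == len(actual), "one prediction per game is required"
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical held-out games: true winner IDs vs. a system's predictions.
true_winners = [101, 205, 101, 307]
predictions = [101, 101, 101, 307]
print(winner_prediction_accuracy(predictions, true_winners))  # → 0.75
```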

The project description is found here. The learning dataset can be downloaded here.

Project 2017-18 and 18-19

Project-2018-19.pdf

How a project is evaluated:

  • Simple problem, easy-to-model easy-to-describe instances, small dataset, standard ML learning algorithms: 20-24
  • Simple problem, feature engineering needed, medium-large dataset, use of algorithms on available platforms, use of scikit-learn or a more efficient implementation of an existing algorithm (e.g., some ad-hoc software developed), performance evaluation: up to 25-28
  • Original problem, complex dataset with non-trivial feature engineering, thorough data analysis and feature/hyper-parameter fitting, non-straightforward use of algorithms or a new algorithm or an ad-hoc implementation, performance evaluation and insight on results: up to 30L

Three very good projects: Deep-Reinforcement-Learning-Proyect-Documentation-Alfonso-Oriola.pdf, A Framework for Genetic Algorithms, RainForestML2016Pantea.pdf

NOTE: Please read carefully how a project is evaluated, and read the project examples above (they were all rated 30L). Once a project is delivered and evaluated, students cannot complain that the grade is too low. We are providing clear indications of what is expected to get the maximum grade. We also expect original work: plagiarism will be punished.

Google Group

MANDATORY!!

Please subscribe to the group Machine Learning 2018-19 on Google Groups.

Slides and course materials (download only those with date=2018)

Timetable Topic PPT PDF Suggested readings
2018 Introduction to ML. Course syllabus and course organization.   ML2018Introduction.pdf  
2018 Building ML systems 2.BuildingMachineLearningSystems.pptx 2.BuildingMachineLearningSystems.pdf https://ai.stanford.edu/~nilsson/MLBOOK.pdf (Chapter 1)
2018 Classifiers: Decision Trees 3.dtrees.ppt 3.dtrees.pdf

Decision Trees: http://www.cs.princeton.edu/courses/archive/spr07/cos424/papers/mitchell-dectrees.pdf

Random Forests: http://www.math.mcgill.ca/yyang/resources/doc/randomforest.pdf

2018 Practical ML: feature engineering 4.Feature_Engineering.pptx 4.Feature_Engineering.pdf http://www.machinelearningtutorial.net/2017/06/17/feature-engineering-in-machine-learning/
2018 Performance Evaluation: error estimates, confidence intervals, one/two-tail test 4.evaluation.ppt 4.evaluation-compressed.pdf chapter5-ml-EVALUATION.pdf
2018 Neural Networks 5.neural.pptx 5.neural.pdf

https://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

https://www.cs.swarthmore.edu/~meeden/cs81/s10/BackPropDeriv.pdf

2018 Deep Learning (Convolutional NN and denoising autoencoders) 5b.Deeplearning.pptx 5b.Deeplearning.pdf

see pointers in slides

and also https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/

2018 Ensemble methods (bagging, boosting) 6.ensembles.pptx 6.ensembles-compressed.pdf

https://onlinelibrary.wiley.com/doi/pdf/10.1002/widm.1249

2018 Support Vector Machines 7.svm.pptx 7.svm.pdf SVM.pdf
2018 Probabilistic learning: Maximum Likelihood Learning, Naive Bayes 8.naivebayes.pptx 8.naivebayes.pdf

http://www.cs.columbia.edu/~mcollins/em.pdf

201x Clustering    

Note: community detection (a form of clustering) is presented in Web and Social Information Extraction during the 2nd semester

2017 Unsupervised learning: Association Rules      
2017 Unsupervised Learning: Reinforcement Learning and Q-Learning    

https://github.com/junhyukoh/deep-reinforcement-learning-papers#all-papers

https://skymind.ai/wiki/deep-reinforcement-learning

2016 Unsupervised Learning: Genetic Algorithms      

Syllabus (2018-19)

  • What is machine learning. Types of learning. Workflow of ML systems.
  • Classifiers. Decision Tree Learning. Random Forest
  • Feature engineering
  • Evaluation: performance measures, confidence intervals and hypothesis testing
  • Ensemble methods
  • Artificial Neural Networks
  • Deep learning (Convolutional networks, Denoising Autoencoders)
  • Support Vector Machines
  • Maximum Likelihood Learning and Naive Bayes
  • Unsupervised Rule learning: Apriori algorithm and frequent itemset mining
  • Reinforcement learning and Q-Learning, Deep Q
  • Tools: Weka, Scikit-learn, TensorFlow
Topic revision: r259 - 2018-12-10 - PaolaVelardi