In recent years we have witnessed a rapidly growing gap between the amount of collected data and the data-processing capabilities of conventional computers. This is not surprising: according to Moore's Law, the processing power of an "average computer" doubles every 18 months, while, according to Lyman and Varian from Berkeley, the amount of stored data doubles every 12 months. On top of this growing gap, there is an increasing need to analyze data more quickly, more precisely, and more "intelligently". Beyond the traditional data mining tasks of classification, regression, and clustering, new challenges have emerged that require completely new algorithms for:
analysis of big networks: web pages, social networks (Facebook, Twitter), traffic, financial networks
recommender systems: Amazon, Netflix
digital forensics: analysis of data related to cybercrime
analysis of large text corpora (Wikipedia, GitHub, Twitter)
scientific data mining (bioinformatics, astronomy, physics)
analysis of sensor data
In order to cope with this overwhelming data flow, several frameworks for distributed data mining, together with specialized data mining algorithms, have been developed, e.g., Hadoop with MapReduce, Spark, and Dask. During the seminar, students (organized in small teams) will work on challenging data mining problems of their own choosing, performing experiments on multi-core computers (from our Data Science Lab) or cluster computers (DAS4 or DAS5) and reporting on their problems, approaches, and results during weekly meetings. Each team will have to summarize its work in a final report.
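To give a flavor of the programming model behind these frameworks, the following stdlib-only Python sketch (not part of the course material) illustrates the three MapReduce stages — map, shuffle, reduce — with a word count, the classic introductory example; Hadoop, Spark, and Dask distribute exactly these stages across many machines:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one document.
    return [(word, 1) for word in document.lower().split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as a distributed framework
    # would do between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values of each key into a count.
    return {word: sum(values) for word, values in groups.items()}

documents = ["big data mining", "mining big networks"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle(pairs))
print(counts["mining"])  # "mining" occurs once in each document -> 2
```

In a real cluster the map calls run in parallel on separate nodes and the shuffle moves data over the network; the per-stage logic, however, is no more complicated than this.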
During the seminar, students will:
gain detailed knowledge of some modern tools used in distributed data mining
gain some hands-on experience with mining big data sets on distributed platforms
learn to work together in small research teams
identify some promising research directions
The most recent timetable can be found at the students' website.
Mode of instruction
Weekly online presentations and discussions
An experimental research project
Total hours of study: 168 hrs.(= 6 EC)
Attending the meetings: 26 hrs.
Practical work: 96 hrs.
Reporting: 32 hrs.
Presentations: 14 hrs.
The grade will be based on 3 components:
software developed during the seminar (30%)
final report (40%)
A. Rajaraman, J. Leskovec, and J. Ullman, Mining of Massive Datasets
Additional materials (articles, links, data sets, ...) will be distributed during the first meeting.
You have to sign up for courses and exams (including retakes) in uSis. Check this link for information about how to register for courses.
Please also register for the course in Blackboard.
Lecturer: dr. Wojtek Kowalczyk
Skype for Business: firstname.lastname@example.org