Machine Learning: Basic Principles

by Alexander Jung

Released as an article.

2020  

Abstract

This tutorial introduces some main concepts of machine learning (ML). From an engineering point of view, the field of ML revolves around developing software that implements the scientific principle: (i) formulate a hypothesis (choose a model) about some phenomenon, (ii) collect data to test the hypothesis (validate the model) and (iii) refine the hypothesis (iterate). One important class of algorithms based on this principle is gradient descent methods, which aim at iteratively refining a model that is parametrized by some ("weight") vector. A plethora of ML methods is obtained by combining different choices for the hypothesis space (model), the quality measure (loss) and the computational implementation of the model refinement (optimization method). After formalizing the main building blocks of an ML problem, some popular algorithmic design patterns for ML methods are discussed. This tutorial grew out of the lecture notes developed for the courses "Machine Learning: Basic Principles" and "Artificial Intelligence", which I have co-taught since 2015 at Aalto University.
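To make the three design choices named in the abstract concrete, here is a minimal sketch (not taken from the tutorial itself) of gradient descent for a linear hypothesis space with squared-error loss; the toy data, step size, and weight initialization are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical toy data set: n data points with d features and real-valued labels.
rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))          # feature matrix
w_true = np.array([1.0, -2.0, 0.5])  # "true" weight vector (toy example only)
y = X @ w_true + 0.1 * rng.normal(size=n)

# (i) Hypothesis space (model): linear predictors h(x) = w^T x, parametrized by a weight vector w.
# (ii) Quality measure (loss): average squared error over the data set.
def loss(w):
    return np.mean((X @ w - y) ** 2)

def gradient(w):
    # Gradient of the average squared error with respect to w.
    return 2.0 * X.T @ (X @ w - y) / n

# (iii) Optimization method: gradient descent iteratively refines the weight vector.
w = np.zeros(d)      # initial hypothesis
step_size = 0.1      # learning rate (assumed value)
for _ in range(200):
    w = w - step_size * gradient(w)

print("learned weights:", w, "final loss:", loss(w))
```

Swapping any one of the three ingredients (a different hypothesis space, a different loss, or a different optimizer such as stochastic gradient descent) yields a different ML method, which is the combinatorial point the abstract makes.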

Archived Files and Locations

application/pdf  2.7 MB  (file_47gpzxf5zbgh7fi4sgfd4vuh3y)
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-08-21
Version: v12
Language: en
arXiv: 1805.05052v12
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 7d3f6e7d-2900-4058-90a9-ed467da417bf