Definition
Tom M. Mitchell provided a widely quoted definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." [1]

Generalization
Generalization is the ability of a machine learning algorithm to perform accurately on new, unseen examples after training on a finite data set. The core objective of a learner is to generalize from its experience. [2] The training examples from its experience come from some generally unknown probability distribution, and the learner has to extract from them something more general about that distribution that allows it to produce useful answers in new cases.
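As a minimal illustration of measuring generalization, the sketch below (assuming scikit-learn, one of the packages listed under Software later in this article, and an arbitrary synthetic dataset) fits a classifier on one portion of the data and measures performance P on examples held out from training:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    # Experience E: a finite sample from an unknown distribution.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Hold out part of the data; the learner never sees it during training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Performance measure P: accuracy on training data vs. unseen data.
    print("train accuracy:", model.score(X_train, y_train))
    print("test accuracy: ", model.score(X_test, y_test))  # estimates generalization

The gap between the two accuracies is one common (if rough) indicator of how well the learner has generalized beyond its training sample.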
Machine learning, knowledge discovery in databases (KDD) and data mining
These three terms are commonly confused, as they often employ the same methods and overlap strongly. They can be roughly separated as follows: machine learning focuses on prediction, based on known properties learned from the training data; data mining (the analysis step of knowledge discovery in databases) focuses on the discovery of (previously) unknown properties of the data. The two areas overlap in many ways: data mining uses many machine learning methods, but often with a slightly different goal in mind. Conversely, machine learning also employs data mining methods, as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in KDD the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by supervised methods, while in a typical KDD task supervised methods cannot be used due to the unavailability of training data.

Human interaction
Some machine learning systems attempt to eliminate the need for human intuition in data analysis, while others adopt a collaborative approach between human and machine. Human intuition cannot, however, be entirely eliminated, since the system's designer must specify how the data is to be represented and what mechanisms will be used to search for a characterization of the data.

Algorithm types
Machine learning algorithms can be organized into a taxonomy based on the desired outcome of the algorithm:
Supervised learning generates a function that maps inputs to desired outputs (also called labels, because they are often provided by human experts labeling the training examples). For example, in a classification problem, the learner approximates a function mapping a vector into classes by looking at input-output examples of the function.
Unsupervised learning models a set of inputs, as in clustering. See also data mining and knowledge discovery.
Semi-supervised learning combines both labeled and unlabeled examples to generate an appropriate function or classifier (see the sketch after this list).
Reinforcement learning learns how to act given an observation of the world. Every action has some impact on the environment, and the environment provides feedback in the form of rewards that guide the learning algorithm.
Transduction tries to predict new outputs based on training inputs, training outputs, and test inputs.
Learning to learn learns its own inductive bias based on previous experience.
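To make the semi-supervised setting concrete, here is a minimal sketch assuming scikit-learn and a toy dataset in which most labels have been deliberately hidden (the convention of marking unlabeled points with -1 is scikit-learn's):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.semi_supervised import LabelPropagation

    X, y = load_iris(return_X_y=True)

    # Hide 80% of the labels; -1 marks an unlabeled example.
    rng = np.random.RandomState(0)
    y_partial = np.copy(y)
    y_partial[rng.rand(len(y)) < 0.8] = -1

    # The model exploits both the few labeled and the many unlabeled points.
    model = LabelPropagation().fit(X, y_partial)
    print("accuracy on true labels:", model.score(X, y))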
Theory
Main article: Computational learning theory
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. In addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning.
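As an example of such a probabilistic bound (a standard result from PAC learning, not specific to this article): for a finite hypothesis class H, a learner that outputs a hypothesis consistent with m training examples will, with probability at least 1 - δ, have true error at most ε whenever

    m ≥ (1/ε) (ln|H| + ln(1/δ))

The guarantee is probabilistic (it can fail with probability δ) and approximate (error up to ε), which is exactly the flavor of result described above.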
In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: positive results show that a certain class of functions can be learned in polynomial time, while negative results show that certain classes cannot be learned in polynomial time. There are many similarities between machine learning theory and statistics, although they use different terms.

Approaches
Main article: List of machine learning algorithms

Decision tree learning
Main article: Decision tree learning
Decision tree learning uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value.
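A minimal sketch of decision tree learning, assuming scikit-learn and its bundled iris dataset purely for illustration:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    # Fit a tree that maps flower measurements to a species (the target value).
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The learned model is a human-readable set of threshold tests.
    print(export_text(tree))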
Association rule learning
Main article: Association rule learning
Association rule learning is a method for discovering interesting relations between variables in large databases.
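Two quantities commonly used to judge whether a discovered relation is interesting are support and confidence. A small self-contained sketch (the transactions are invented for illustration):

    # Toy market-basket transactions.
    transactions = [
        {"bread", "milk"},
        {"bread", "butter"},
        {"bread", "milk", "butter"},
        {"milk"},
    ]

    def support(itemset):
        # Fraction of transactions containing every item in the set.
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(lhs, rhs):
        # Of the transactions containing lhs, how many also contain rhs?
        return support(lhs | rhs) / support(lhs)

    # Evaluate the candidate rule {bread} -> {milk}.
    print(support({"bread", "milk"}))       # 0.5
    print(confidence({"bread"}, {"milk"}))  # about 0.667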
Artificial neural networks
Main article: Artificial neural network
An artificial neural network (ANN) learning algorithm, usually called a "neural network" (NN), is a learning algorithm inspired by the structure and/or functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.

Genetic programming
Main articles: Genetic programming and Evolutionary computation
Genetic programming (GP) is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. It is a specialization of genetic algorithms (GA) where each individual is a computer program. It is a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task.
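A toy sketch of the idea, assuming nothing beyond the Python standard library: it evolves arithmetic expression trees (the "programs") toward a target function, with fitness measured as squared error. All parameters are invented for illustration, and it uses mutation only for brevity, whereas real GP also uses crossover:

    import random

    # Evolve an expression that approximates the target f(x) = x*x + x.
    OPS = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}
    TERMINALS = ['x', 1.0]

    def random_tree(depth=2):
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == 'x':
            return x
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        # Mean squared error against the target (lower is better).
        xs = [i / 10.0 for i in range(-10, 11)]
        return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs) / len(xs)

    def mutate(tree):
        # Occasionally replace the whole individual, else mutate a subtree.
        if not isinstance(tree, tuple) or random.random() < 0.2:
            return random_tree()
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))

    population = [random_tree() for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness)
        survivors = population[:10]  # truncation selection
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    best = min(population, key=fitness)
    print(best, fitness(best))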
Inductive logic programming
Main article: Inductive logic programming
Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program which entails all the positive and none of the negative examples.

Support vector machines
Main article: Support vector machines
Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.

Clustering
Main article: Cluster analysis
Cluster analysis or clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning and a common technique for statistical data analysis.
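A minimal clustering sketch, again assuming scikit-learn, with synthetic data invented for illustration; k-means is one common clustering algorithm, not the only one:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Unlabeled observations drawn from three synthetic groups.
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    # Assign each observation to one of k=3 clusters; no labels are used.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_[:10])      # cluster index per observation
    print(kmeans.cluster_centers_)  # one centroid per cluster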
Bayesian networks
Main article: Bayesian network
A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning.
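For the smallest possible network (one disease node with an edge to one symptom node), inference reduces to Bayes' rule. A sketch with invented probabilities:

    # Invented parameters for a two-node network: Disease -> Symptom.
    p_disease = 0.01              # P(D)
    p_symptom_given_d = 0.90      # P(S | D)
    p_symptom_given_not_d = 0.05  # P(S | not D)

    # P(S) by marginalizing over the parent node.
    p_symptom = (p_symptom_given_d * p_disease
                 + p_symptom_given_not_d * (1 - p_disease))

    # Inference: probability of the disease given an observed symptom.
    p_d_given_s = p_symptom_given_d * p_disease / p_symptom
    print(p_d_given_s)  # about 0.154

Even with a highly indicative symptom, the rare disease remains fairly unlikely; larger networks chain this same computation over many variables.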
Reinforcement learning
Main article: Reinforcement learning
Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.

Representation learning
Several learning algorithms, mostly unsupervised learning algorithms, aim at discovering better representations of the inputs provided during training. Classical examples include principal components analysis and cluster analysis. Representation learning algorithms often attempt to preserve the information in their input while transforming it in a way that makes it useful, often as a pre-processing step before classification or prediction. Such representations allow reconstruction of inputs coming from the unknown data-generating distribution, while not necessarily being faithful for configurations that are implausible under that distribution.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data. [3]
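A minimal sketch of representation learning via principal components analysis, assuming scikit-learn; the dataset and target dimensionality are arbitrary choices for illustration:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, _ = load_digits(return_X_y=True)  # 64-dimensional pixel vectors

    # Learn a 10-dimensional representation that preserves most variance.
    pca = PCA(n_components=10).fit(X)
    Z = pca.transform(X)                 # the new representation

    # An approximate reconstruction of the inputs from the representation.
    X_approx = pca.inverse_transform(Z)
    print(Z.shape, pca.explained_variance_ratio_.sum())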
Sparse dictionary learning
In the representation learning area, sparse dictionary learning is one of the most popular methods[citation needed] and has seen success in many applications.[citation needed] In sparse dictionary learning, a datum is represented as a linear combination of basis functions, and the coefficients are assumed to be sparse. Let x be a d-dimensional datum and D a d × n matrix, where each column of D represents a basis function; r is the coefficient vector that represents x using D. Mathematically, sparse dictionary learning means solving x ≈ Dr, where r is sparse. Generally speaking, n is assumed to be larger than d to allow the freedom for a sparse representation.
Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a new datum belongs. Suppose a dictionary has already been built for each class; a new datum is then associated with the class whose dictionary gives the best sparse representation of it. Sparse dictionary learning has also been applied to image denoising: the key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot. Interested readers can refer to [4].
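A minimal sketch of the sparse representation step, assuming scikit-learn and a random dictionary purely for illustration; a real application would learn D from data, for example with K-SVD [4]. Note that scikit-learn stores the dictionary's basis functions as rows rather than columns:

    import numpy as np
    from sklearn.decomposition import SparseCoder

    d, n = 8, 32  # datum dimension d, dictionary size n > d
    rng = np.random.RandomState(0)

    # D: n basis functions (as rows, in scikit-learn's convention), normalized.
    D = rng.randn(n, d)
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    x = rng.randn(1, d)  # one datum to encode

    # Find a coefficient vector r with at most 3 nonzeros such that x ≈ r D.
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=3)
    r = coder.transform(x)
    print(np.count_nonzero(r), np.linalg.norm(x - r @ D))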
Applications
Applications for machine learning include: machine perception, computer vision, natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics, brain-machine interfaces, cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, sequence mining, speech and handwriting recognition, object recognition in computer vision, game playing, software engineering, adaptive websites, robot locomotion, computational finance, structural health monitoring, and sentiment analysis (or opinion mining).

In 2006, the online movie company Netflix held the first "Netflix Prize" competition to find a program that could better predict user preferences and beat its existing Netflix movie recommendation system by at least 10%. The AT&T Research team BellKor beat out several other teams with their machine learning program "Pragmatic Chaos". After winning several minor prizes, it won the grand prize competition in 2009 for $1 million. [5]

Software
RapidMiner, KNIME, Weka, ODM, Shogun toolbox, Orange, Apache Mahout, scikit-learn, and mlpy are software suites containing a variety of machine learning algorithms.
Journals and conferences
Machine Learning (journal)
Journal of Machine Learning Research
Neural Computation (journal)
Journal of Intelligent Systems (journal)
International Conference on Machine Learning (ICML) (conference)
Neural Information Processing Systems (NIPS) (conference)

See also
Adaptive control
Cache language model
Computational intelligence
Computational neuroscience
Cognitive science
Data mining
Explanation-based learning
Important publications in machine learning
Multi-label classification
Pattern recognition
Predictive analytics

References
[1] Mitchell, T. (1997). Machine Learning, McGraw Hill. ISBN 0-07-042807-7.
[2] Christopher M. Bishop (2006). Pattern Recognition and Machine Learning, Springer. ISBN 0-387-31073-8.
[3] Yoshua Bengio (2009). Learning Deep Architectures for AI. Now Publishers Inc., pp. 1-3. ISBN 9781601982940.
[4] Aharon, M., M. Elad, and A. Bruckstein. 2006. "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation." IEEE Transactions on Signal Processing 54 (11): 4311-4322.
[5] "BellKor Home Page", research.att.com.

Further reading
Sergios Theodoridis, Konstantinos Koutroumbas (2009). Pattern Recognition, 4th Edition, Academic Press. ISBN 978-1-59749-272-0.
Ethem Alpaydın (2004). Introduction to Machine Learning (Adaptive Computation and Machine Learning), MIT Press. ISBN 0-262-01211-1.
Bing Liu (2007). Web Data Mining: Exploring Hyperlinks, Contents and Usage Data. Springer. ISBN 3-540-37881-2.
Toby Segaran, Programming Collective Intelligence, O'Reilly. ISBN 0-596-52932-5.
Ray Solomonoff, "An Inductive Inference Machine". A privately circulated report from the 1956 Dartmouth Summer Research Conference on AI.
Ray Solomonoff, "An Inductive Inference Machine", IRE Convention Record, Section on Information Theory, Part 2, pp. 56-62, 1957.
Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1983). Machine Learning: An Artificial Intelligence Approach, Tioga Publishing Company. ISBN 0-935382-05-4.
Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1986). Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann. ISBN 0-934613-00-1.
Yves Kodratoff, Ryszard S. Michalski (1990). Machine Learning: An Artificial Intelligence Approach, Volume III, Morgan Kaufmann. ISBN 1-55860-119-8.
Ryszard S. Michalski, George Tecuci (1994). Machine Learning: A Multistrategy Approach, Volume IV, Morgan Kaufmann. ISBN 1-55860-251-8.
Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Oxford University Press. ISBN 0-19-853864-2.
Richard O. Duda, Peter E. Hart, David G. Stork (2001). Pattern Classification (2nd edition), Wiley, New York. ISBN 0-471-05669-3.
Huang T.-M., Kecman V., Kopriva I. (2006). Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning, Springer-Verlag, Berlin, Heidelberg, 260 pp., 96 illus., Hardcover. ISBN 3-540-31681-7.
Kecman, Vojislav (2001). Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, The MIT Press, Cambridge, MA, 608 pp., 268 illus. ISBN 0-262-11255-8.
MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms, Cambridge University Press. ISBN 0-521-64298-1.
Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann. ISBN 0-12-088407-0.
Sholom Weiss and Casimir Kulikowski (1991). Computer Systems That Learn, Morgan Kaufmann. ISBN 1-55860-065-5.
Mierswa, Ingo and Wurst, Michael and Klinkenberg, Ralf and Scholz, Martin and Euler, Timm: YALE: Rapid Prototyping for Complex Data Mining Tasks, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-06), 2006.
Trevor Hastie, Robert Tibshirani and Jerome Friedman (2001). The Elements of Statistical Learning, Springer. ISBN 0-387-95284-5.
Vladimir Vapnik (1998). Statistical Learning Theory. Wiley-Interscience. ISBN 0-471-03003-1.

External links
International Machine Learning Society
There is a popular online course by Andrew Ng at ml-class.org. It uses GNU Octave. The course is a free version of Stanford University's actual course, whose lectures are also available for free.
Machine Learning Video Lectures