Clustering: Type
Unsupervised learning, clustering class
Clustering: Definition
Methods that assign a set of objects into groups (clusters).
Objects in a cluster are more similar to each other than to those in other clusters. Enables understanding of the differences as well as the similarities within the data.
Clustering: Preference Bias
Prefers data that forms natural groupings under some distance measure (Euclidean, Manhattan, or others)
Clustering: Restriction Bias
No restriction
Clustering: Flavors
Ward hierarchical clustering, k-means, Gaussian mixture models, spectral clustering, BIRCH, affinity propagation, fuzzy clustering
Clustering: K-Means
For a given K, finds K clusters by iteratively moving each cluster center to the center of gravity (mean) of its assigned points and then reassigning each point to its nearest center.
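A minimal sketch of the idea in Python, assuming scikit-learn and synthetic blob data (both illustrative choices, not part of these notes):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # illustrative 2-D toy data
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)  # K = 3
print(km.cluster_centers_)  # final centers of gravity
print(km.labels_[:10])      # cluster assignments for the first 10 points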
K-Nearest Neighbors: Type
Supervised learning, instance-based
K-Nearest Neighbors: Definition
K-NN is an algorithm for when you have objects that have already been classified or labeled, plus similar objects that haven't been, and you want a way to label the unlabeled ones automatically: each new object takes the majority label (or, for regression, the average value) of its K nearest labeled neighbors.
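A minimal classification sketch, assuming scikit-learn and the iris dataset as a stand-in for any labeled collection:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=3)  # K = 3; Euclidean distance by default
knn.fit(X, y)                              # lazy: fitting just stores the instances
print(knn.predict(X[:5]))                  # label instances by majority vote of neighbors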
K-Nearest Neighbors: Pros
1: Simple; 2: Powerful; 3: Lazy, no training involved; 4: Naturally handles multiclass classification and regression
K-Nearest Neighbors: Cons
1: Performs poorly on high-dimensionality datasets; 2: Expensive and slow to predict new instances; 3: Must define a meaningful distance function
K-Nearest Neighbors: Preference Bias
Good for problems where a distance-based approximation is meaningful; also good for outlier detection
K-Nearest Neighbors: Restriction Bias
Low-dimensional datasets
K-Nearest Neighbors: Example Applications
1: Computer security: intrusion detection; 2: Fault detection in semiconductor manufacturing; 3: Video content retrieval; 4: Gene expression
Decision Trees: Definition
Each internal node in the tree tests a single attribute; the edges leading away from that node represent the possible outcomes of the test, and leaf nodes represent the final decision.
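A minimal sketch, assuming scikit-learn's CART implementation and the iris dataset (illustrative choices):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))  # each printed node tests one attribute; leaves give the decision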
Decision Trees: Pros
1: Fast; 2: Robust to noise and missing values; 3: Accurate
Decision Trees: Cons
1: Complex trees are hard to interpret; 2: Duplication within the same sub-tree is possible
Decision Trees: Best Uses
Decision Trees: Restriction Bias
Decision Trees: Example Applications
1: Star classification; 2: Medical diagnosis; 3: Credit risk analysis
Decision Trees: Flavors
CART, ID3
Hidden Markov Models: Type
Supervised or unsupervised; Markovian class
Hidden Markov Models: Definition
Hidden Markov models are a kind of probabilistic model often used in language modeling. A sequence of hidden states is assumed to follow a Markov chain, where each state is independent of all past states given the previous one, and each observation depends only on the current hidden state.
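A minimal NumPy sketch of the Markov assumption itself (a plain Markov chain, the building block of an HMM); the two weather states and the transition matrix are illustrative assumptions:

import numpy as np

states = ["sunny", "rainy"]
P = np.array([[0.8, 0.2],   # illustrative P(next state | sunny)
              [0.4, 0.6]])  # illustrative P(next state | rainy)

rng = np.random.default_rng(0)
s = 0  # start in "sunny"
chain = []
for _ in range(10):
    s = rng.choice(2, p=P[s])  # next state depends only on the current state
    chain.append(states[s])
print(chain)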
Hidden Markov Models: Pros
Markov chains are useful models of many natural processes and the basis of powerful techniques in probabilistic inference and randomized algorithms.
Hidden Markov Models: Cons
Hidden Markov Models: Preference Bias
Generally works well for sequential data where the Markov assumption holds
Hidden Markov Models: Restriction Bias
Prefers time-series data and memoryless processes
Hidden Markov Models: Example Applications
Temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, and bioinformatics.
Hidden Markov Models: Flavors
Markov chains, Hidden Markov Models
Linear Regression: Type
Supervised learning, regression class
Linear Regression: Definition
Fits a continuous linear function to the data in order to predict results.
Can be univariate or multivariate.
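A minimal sketch, assuming scikit-learn and noisy synthetic data generated around a known line:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, size=50)  # roughly y = 3x + 2, plus noise

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # recovered slope and intercept, near 3 and 2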
Linear Regression: Pros
1: Very fast; prediction runs in constant time with respect to the training-set size; 2: Easy to understand the model; 3: Less prone to overfitting
Linear Regression: Cons
1: Unable to model complex relationships; 2: Unable to capture nonlinear relationships without first transforming the inputs
Linear Regression: Preference Bias
1: Prefers continuous variables; 2: A first look at a dataset; 3: Numerical data with lots of features
Linear Regression: Restriction Bias
Restricted to linear relationships between the inputs and the output (unless the inputs are transformed first)
Linear Regression: Example Applications
1: Fitting a line
Naive Bayes: Type
Supervised learning; used for classification; probabilistic approach
Naive Bayes: Definition
Naive Bayes models assume the input variables are statistically independent given the class. Despite this simplifying assumption, they are effective classification tools that are easy to use and interpret, and they are particularly appropriate when the dimensionality of the input space is high. For these reasons, Naive Bayes can often outperform more sophisticated classification methods.
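A minimal sketch, assuming scikit-learn's Gaussian variant (normal conditional distributions) and the iris dataset:

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
nb = GaussianNB().fit(X, y)     # models each feature as class-conditionally normal
print(nb.predict(X[:5]))        # predicted classes
print(nb.predict_proba(X[:1]))  # class probabilities combined via Bayes' rule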
Naive Bayes: Pros
1: Easy to use and interpret; 2: Works well with high dimensional problems
Naive Bayes: Preference Bias
Works on problems where the inputs are independent of each other given the class
Naive Bayes: Restriction Bias
Prefers problems where every class-conditional probability stays greater than zero (a zero count zeroes out the whole product of probabilities)
Naive Bayes: Example Applications
Naive Bayes: Flavors
A variety of methods exist for modeling the conditional distributions of the inputs including normal, lognormal, gamma, and Poisson.
Neural Networks: Type
Supervised learning; nonlinear functional approximation
Neural Networks: Definition
Networks of interconnected units organized in layers; each layer depends on the calculations done on the layer before it. With experience, networks can learn, as feedback strengthens or inhibits connections that produce certain results.
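A minimal sketch, assuming scikit-learn's MLPClassifier and the two-moons toy problem as an illustrative nonlinear task:

from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)           # backpropagation strengthens or weakens connection weights
print(net.score(X, y))  # training accuracy on the nonlinear boundary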
Neural Networks: Pros
1: Extremely powerful, can model even very complex relationships;
2: No need to understand the underlying data;
3: Almost works by "magic"
Neural Networks: Cons
1: Prone to overfitting;
2: Long training time;
3: Requires significant computing power for large datasets;
4: Model is essentially unreadable;
5: Work best with "homogeneous" data where features all have similar meanings
Neural Networks: Preference Bias
Prefers binary inputs
Neural Networks: Restriction Bias
Little restriction bias
Neural Networks: Example Applications
1: Images; 2: Video; 3: "Human-intelligence" type tasks like driving or flying; 4: Robotics
Neural Networks: Flavors
Deep learning
Support Vector Machines: Type
Supervised learning for defining a decision boundary
Support Vector Machines: Definition
Divides an instance space by finding the boundary that is as far as possible from both classes; this boundary is called the "maximum-margin hyperplane".
Only the points near the hyperplane are important. These points near the boundary are called the support vectors.
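A minimal sketch, assuming scikit-learn's SVC with an RBF kernel and the two-moons toy data (illustrative choices for a nonlinear boundary):

from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)
svm = SVC(kernel="rbf", C=1.0).fit(X, y)  # finds the maximum-margin boundary
print(svm.n_support_)                     # support vectors per class define the boundary
print(svm.score(X, y))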
Support Vector Machines: Pros
1: Can model complex, nonlinear relationships; 2: Robust to noise (because they maximize margins)
Support Vector Machines: Cons
1: Need to select a good kernel function; 2: Model parameters are difficult to interpret; 3: Sometimes numerical stability problems; 4: Requires significant memory and processing power
Support Vector Machines: Preference Bias
Works where there is a definite distinction between two classifications
Support Vector Machines: Restriction Bias
Prefers binary classification problems
Support Vector Machines: Example Applications
1: Text classification; 2: Image classification; 3: Handwriting recognition