1. Amazon EMR
Amazon Elastic MapReduce (Amazon EMR) distributes computational work across clusters of virtual servers running in the Amazon cloud, using Hadoop
- It uses a distributed processing architecture called MapReduce, in which a task is mapped to a set of servers for processing.
- The results of the computation performed by those servers are then reduced down to a single output set (a toy sketch of the pattern follows this list).
- One node, designated as the master node, controls the distribution of tasks.
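To make the map/reduce pattern concrete, here is a minimal, hypothetical sketch in plain Python (word counting on toy strings; a real EMR job would run the map and reduce phases on separate Hadoop nodes):

```python
from collections import defaultdict

# Toy input; on a real cluster each document would be processed by a different worker.
documents = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map phase: each mapper independently emits (key, value) pairs.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group intermediate pairs by key (the framework does this across servers).
grouped = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        grouped[key].append(value)

# Reduce phase: collapse the values for each key into a single output set.
word_counts = {key: sum(values) for key, values in grouped.items()}
print(word_counts)  # {'the': 3, 'quick': 2, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 2}
```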
1. Measures of Cluster Validity
- Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types:
- External Index: Used to measure the extent to which cluster labels match externally supplied class labels
- Internal Index: Used to measure the goodness of a clustering structure without respect to the external information
- sum of squared error (SSE)
- Relative Index: used to compare two different clusterings or clusters
- often an external or internal index is used for this purpose, e.g., SSE or entropy
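As an illustration of an internal index, a small sketch of SSE on toy data (the function name and the data are made up):

```python
import numpy as np

def sse(points, labels, centroids):
    """Sum of squared error: squared distance of every point to its cluster centroid."""
    total = 0.0
    for k, centroid in enumerate(centroids):
        members = points[labels == k]
        total += np.sum((members - centroid) ** 2)
    return total

# Toy data: two obvious clusters around (0, 0) and (5, 5).
points = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])
print(sse(points, labels, centroids))
```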
2. Measuring Cluster Validity via Correlation
- Two matrices
- Proximity Matrix
- Incidence Matrix
- one row and one column for each data point
- an entry is 1 if the associated pair of points belong to the same cluster, else 0
- Compute the correlation between the two matrices
- since the matrices are symmetric, only the correlation between n(n-1)/2 entries needs to be calculated
- High correlation indicates that points that belong to the same cluster are close to each other
- Not a good measure for some density-based clusters
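A short sketch of the correlation idea (toy points and labels are assumptions; with a distance-based proximity matrix, a good clustering shows up as a strongly negative correlation):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

# Toy data and a clustering of it (labels assumed, e.g. from K-means).
points = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
labels = np.array([0, 0, 1, 1])

# Proximity matrix: pairwise distances between points.
proximity = squareform(pdist(points))

# Incidence matrix: entry is 1 if the pair of points belongs to the same cluster, else 0.
incidence = (labels[:, None] == labels[None, :]).astype(float)

# Correlate only the n(n-1)/2 entries above the diagonal (both matrices are symmetric).
iu = np.triu_indices(len(points), k=1)
corr, _ = pearsonr(proximity[iu], incidence[iu])
print(corr)  # strongly negative here: same-cluster pairs have small distances
```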
1. Hierarchical Clustering
- Produces a set of nested clusters
- organized as a hierarchical tree
- Can be visualized as a dendrogram
- a tree-like diagram that records the merges or splits
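For example, a minimal sketch using SciPy's hierarchical-clustering utilities on toy data (the data, linkage method, and cut level are choices of mine, not from the notes):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
import matplotlib.pyplot as plt

# Toy 2-D points; any small dataset works.
points = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [9, 9]], dtype=float)

# Agglomerative clustering; 'average' is group-average linkage (other choices come up below).
Z = linkage(points, method="average")

# "Cutting" the tree at a chosen number of clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)

# The dendrogram records the sequence of merges and their heights.
dendrogram(Z)
plt.show()
```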
2. Strengths of Hierarchical Clustering
- Do not have to assume any particular number of clusters
- any desired number of clusters can be obtained by “cutting” the dendrogram at the proper level
- They may correspond to meaningful taxonomies
- examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)
3. Two main types of Hierarchical Clustering
- Agglomerative: start with the points as individual clusters
- At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left.
- Divisive: start with one, all-inclusive cluster
- At each step, split a cluster until each cluster contains a single point (or there are k clusters)
4. Inter-Cluster Similarity
- MIN (single link)
- strength: can handle non-elliptical shapes
- weakness: sensitive to noise and outliers
- MAX (complete link)
- strength: less susceptible to noise and outliers
- weakness: tends to break large clusters; biased towards globular clusters
- Group Average
- proximity of two clusters is the average of pairwise similarity between points in the two clusters
- Strength: less susceptible to noise and outliers
- Weakness: biased towards globular clusters
- Distance between centroids
- Other methods driven by an objective function
- Ward’s method uses squared error
- Similarity of two clusters is based on the increase in squared error when two clusters are merged
- Similar to group average if the distance between points is distance squared.
- Strength: less susceptible to noise and outliers
- Weakness: biased towards globular clusters
- Hierarchical analogue of K-means
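These linkage definitions map directly onto SciPy's `linkage` method names; a quick sketch comparing them on toy data (the data and the cut level are assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.random.RandomState(0).rand(20, 2)  # toy data

# 'single' = MIN, 'complete' = MAX, 'average' = group average,
# 'centroid' = distance between centroids, 'ward' = Ward's method.
for method in ["single", "complete", "average", "centroid", "ward"]:
    Z = linkage(points, method=method)
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(method, labels)
```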
5. Hierarchical Clustering: Problems and Limitations
- Once a decision is made to combine two clusters, it cannot be undone
- No objective function is directly minimized
- Different schemes have problems with one or more of the following
- sensitivity to noise and outliers
- difficulty handling different sized clusters and convex shapes
- breaking large clusters
6. MST: Divisive Hierarchical Clustering
- Build MST (Minimum Spanning Tree)
- start with a tree that consists of any point
- in successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not
- add q to the tree and put an edge between p and q
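A rough sketch of the full idea under common conventions (grow the MST Prim-style, then remove the longest edges so the remaining connected components become clusters; all helper names and the toy data are mine):

```python
import numpy as np

def mst_edges(points):
    """Grow an MST Prim-style: repeatedly attach the closest outside point to the tree."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    in_tree = {0}                      # start the tree from an arbitrary point
    edges = []
    while len(in_tree) < n:
        best = None
        for p in in_tree:
            for q in range(n):
                if q not in in_tree and (best is None or dist[p, q] < best[2]):
                    best = (p, q, dist[p, q])
        edges.append(best)             # (p, q) is the closest tree/non-tree pair
        in_tree.add(best[1])
    return edges

def mst_divisive(points, k):
    """Keep only the n-k shortest MST edges; each connected component is a cluster."""
    keep = sorted(mst_edges(points), key=lambda e: e[2])[: len(points) - k]
    parent = list(range(len(points)))  # union-find over the surviving edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for p, q, _ in keep:
        parent[find(p)] = find(q)
    return [find(i) for i in range(len(points))]

points = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [9, 9]], dtype=float)
print(mst_divisive(points, k=3))  # three groups: {0,1,2}, {3,4}, {5}
```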
1. Partitional Clustering vs. Hierarchical Clustering
- Partitional clustering: A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset.
- Hierarchical clustering: A set of nested clusters organized as a hierarchical tree.
2. Types of Cluster
- Well-Separated Clusters: a cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than any point not in the cluster.
- Center-based Clusters: A cluster is a set of objects such that any point in a cluster is closer (more similar) to the “center” of a cluster, than to the center of any other cluster.
- The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster.
- Contiguous Clusters (Nearest Neighbor or Transitive): A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.
- Density-based: A cluster is a dense region of points, which is separated by low-density regions, from other regions of high density.
- Used when clusters are irregular or intertwined.
- Shared Property or Conceptual Clusters: Finds clusters that share some common property or represent a particular concept.
- Cluster Defined by an Objective Function:
- Finds clusters that minimize or maximize an objective function
- Enumerate all possible ways of dividing points into clusters and evaluate the “goodness” of each potential set of clusters by using the given objective function (NP hard).
- Can have global or local objectives
- hierarchical clustering algorithms typically have local objectives
- partitional algorithms typically have global objectives
- A variant of the global objective function approach is to fit the data to a parameterized model
- parameters for the model are determined from the data
- mixture models assume that the data is a “mixture” of a number of statistical distributions.
- Map the clustering problem to a different domain and solve a related problem in that domain
- proximity matrix defines a weighted graph, where the nodes are the points being clustered, and the weighted edges represent the proximities between points.
- Clustering is equivalent to breaking the graph into connected components, one for each cluster.
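A small sketch of that graph view (the distance threshold and toy data are assumptions of mine): build a graph from the proximity matrix, keep only sufficiently close pairs, and read clusters off as connected components.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

points = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6]], dtype=float)

# Proximity matrix (here: Euclidean distances).
dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)

# Keep an edge only when two points are closer than a chosen threshold.
threshold = 2.0
adjacency = csr_matrix((dist < threshold).astype(int))

# Each connected component of the graph is one cluster.
n_clusters, labels = connected_components(adjacency, directed=False)
print(n_clusters, labels)  # 2 clusters: [0 0 0 1 1]
```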
3. K-means Clustering
- Initial centroids are often chosen randomly.
- Clusters produced vary from one run to another.
- The centroid is (typically) the mean of the points in the cluster.
- “Closeness” is measured by Euclidean distance, cosine similarity, correlation, etc.
- K-means will converge for common similarity measures mentioned above.
- Most of the convergence happens in the first few iterations.
- often the stopping condition is changed to “until relatively few points change clusters”
- Complexity is O(n*k*i*d)
- n: number of points
- k: number of clusters
- i: number of iterations
- d: number of attributes
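A compact numpy sketch of the basic algorithm described above (random initial centroids, Euclidean closeness, mean update); the convergence test and the toy data are my own choices:

```python
import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initial centroids are chosen randomly from the data points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the closest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # converged: centroids stopped moving
            break
        centroids = new_centroids
    return labels, centroids

points = np.vstack([np.random.default_rng(1).normal(loc=c, size=(50, 2)) for c in (0, 5)])
labels, centroids = kmeans(points, k=2)
print(centroids)
```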
4. Evaluating K-means Clustering
- If there are K “real” clusters then the chance of selecting one centroid from each cluster is small
- The chance is relatively small when K is large.
- If clusters are the same size, n, then
- P = (ways to select one centroid from each cluster) / (ways to select K centroids) = K!·n^K / (Kn)^K = K!/K^K
- For example, if K = 10, then P = 10!/10^10 ≈ 0.00036
5. Solutions to Initial Centroids Problem
- Multiple runs: helps, but probability is not on your side
- Sample and use hierarchical clustering to determine initial centroids
- Select more than k initial centroids and then select among these initial centroids
- select the most widely separated (a farthest-point sketch follows this list)
- Bisecting K-means
- Not as susceptible to initialization issues
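One possible way to realize "select the most widely separated" centroids: greedy farthest-point selection over an oversampled candidate set (the oversampling factor, function name, and toy data are assumptions):

```python
import numpy as np

def widely_separated_centroids(points, k, oversample=5, seed=0):
    rng = np.random.default_rng(seed)
    # Take more than k candidate centroids at random ...
    m = min(k * oversample, len(points))
    candidates = points[rng.choice(len(points), size=m, replace=False)]
    # ... then greedily keep the k most widely separated ones.
    chosen = [candidates[0]]
    while len(chosen) < k:
        dists = np.min(
            np.linalg.norm(candidates[:, None, :] - np.array(chosen)[None, :, :], axis=2),
            axis=1,
        )
        chosen.append(candidates[np.argmax(dists)])  # farthest from everything chosen so far
    return np.array(chosen)

points = np.random.default_rng(2).random((200, 2))
print(widely_separated_centroids(points, k=3))
```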
6. Handling Empty Clusters
- Basic K-means algorithm can yield empty clusters
- Several strategies
- Choose the point that contributes most to SSE
- Choose a point from the cluster with the highest SSE
- If there are several empty clusters, the above can be repeated several times
7. Updating Centers Incrementally
- In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid
- An alternative is to update the centroids after each assignment (incremental approach, sketched after this list)
- each assignment updates zero or two centroids
- more expensive
- introduces an order dependency
- never get an empty cluster
- can use “weights” to change the impact
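A sketch of one incremental update (the centroid is a running mean, so it can be adjusted in place whenever a point leaves or joins a cluster; the function name and the toy state are assumptions):

```python
import numpy as np

def reassign_point(point, old, new, centroids, counts):
    """Move one point between clusters, adjusting only the affected centroids."""
    if old == new:
        return  # zero centroids change
    # Remove the point's contribution from the running mean of its old cluster.
    counts[old] -= 1
    if counts[old] > 0:
        centroids[old] -= (point - centroids[old]) / counts[old]
    # Add the point's contribution to the running mean of its new cluster.
    counts[new] += 1
    centroids[new] += (point - centroids[new]) / counts[new]

# Toy state: two centroids with three points each; the point below is assumed to be in cluster 0.
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
counts = np.array([3, 3])
reassign_point(np.array([1.0, 1.0]), old=0, new=1, centroids=centroids, counts=counts)
print(centroids, counts)  # cluster 0 shifts away from (1, 1); cluster 1 shifts toward it
```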
8. Pre-processing and Post-processing
- Pre-processing:
- normalize the data
- eliminate outliers
- Post-processing:
- Eliminate small clusters that may represent outliers
- Split “loose” clusters, i.e., clusters with relatively high SSE
- Merge clusters that are “close” and that have relatively low SSE
- Can use these steps during the clustering process
9. Bisecting K-means
- Bisecting K-means algorithm
- variant of K-means that can produce a partitional or a hierarchical clustering
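A rough sketch of bisecting K-means (splitting the cluster with the largest SSE is one common criterion; the helper name is mine, the 2-way split uses scikit-learn's KMeans, and the data is toy):

```python
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(points, k, seed=0):
    """Repeatedly split one cluster in two (2-means) until k clusters exist."""
    clusters = [points]
    while len(clusters) < k:
        # Pick the cluster with the largest SSE as the one to bisect.
        sses = [np.sum((c - c.mean(axis=0)) ** 2) for c in clusters]
        target = clusters.pop(int(np.argmax(sses)))
        km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(target)
        clusters.append(target[km.labels_ == 0])
        clusters.append(target[km.labels_ == 1])
    return clusters

rng = np.random.default_rng(3)
points = np.vstack([rng.normal(loc=c, size=(40, 2)) for c in (0, 4, 8)])
for c in bisecting_kmeans(points, k=3):
    print(len(c), c.mean(axis=0))
```

Keeping the intermediate splits as a tree gives the hierarchical view; keeping only the final k clusters gives the partitional one.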
10. Limitations of K-means
- K-means has problems when clusters have differing sizes, differing densities, or non-globular shapes
- K-means has problems when the data contains outliers.
- One solution is to use many clusters
- this finds parts of the natural clusters, but the parts then need to be put back together