# K-means Clustering

K-means is one of the first algorithms you will encounter in unsupervised learning.

K-means is similar to KNN in that both are **distance-based algorithms** which assume that similar observations lie close together. The difference is that KNN is supervised: it checks the labels of the K nearest neighbours and then decides the corresponding label of our point, while K-means is unsupervised: it groups unlabelled points into K clusters based on their distance to the cluster centroids.

**Steps for Algorithm**

1. Specify the desired number of clusters K.
2. Randomly assign each data point to a cluster.
3. Compute the cluster centroids (the mean of all points within each cluster).
4. Re-assign each point to the closest cluster centroid.
5. Re-compute the cluster centroids.
6. Repeat steps 4 and 5 until no further improvement is possible: when no data point switches clusters between two successive iterations (equivalently, when the centroids stop moving and become static), the algorithm terminates, unless a maximum number of iterations is set explicitly.
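The steps above can be sketched in plain NumPy. This is a minimal illustration, not a production implementation; the `kmeans` helper name and the two-blob test data are my own:

```python
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    """Minimal K-means sketch (Lloyd's algorithm with random initial assignment)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))  # step 2: random cluster assignment
    for _ in range(max_iters):
        # steps 3/5: each centroid is the mean of the points in its cluster
        # (if a cluster empties out, re-seed it with a random point)
        centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                              else X[rng.integers(len(X))] for j in range(k)])
        # step 4: re-assign each point to its closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # step 6: no point switched clusters
            break
        labels = new_labels
    return labels, centroids

# Two well-separated 2-D blobs as toy data
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

In practice you would use a library implementation (e.g. scikit-learn's `KMeans`), which adds smarter initialization such as k-means++.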

**Ways to determine K**

Although no one can pick K with certainty unless it is dictated to the analyst by the stakeholders, there are still methods that point towards the best K to use.

These methods include direct methods and statistical testing methods:

- Direct methods: consist of optimizing a criterion, such as the within-cluster sum of squares or the average silhouette. The corresponding methods are named the *elbow* and *silhouette* methods, respectively.
- Statistical testing methods: consist of comparing evidence against a null hypothesis. An example is the *gap statistic*.

*Elbow Method*

- Compute clustering algorithm (e.g., k-means clustering) for different values of k. For instance, by varying k from 1 to 10 clusters.
- For each k, calculate the total within-cluster sum of squares (WSS).

- Plot the curve of WSS according to the number of clusters k.
- The location of a bend (knee) in the plot is generally considered as an indicator of the appropriate number of clusters.
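The elbow curve is quick to compute. A sketch using scikit-learn, where the fitted model's `inertia_` attribute is the total WSS (the three-blob toy data here is my own):

```python
import numpy as np
from sklearn.cluster import KMeans

# Three well-separated 2-D blobs, so the bend should appear around k = 3
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, (40, 2)) for loc in (0, 5, 10)])

wss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wss.append(km.inertia_)  # total within-cluster sum of squares for this k
# Plot k (1..10) against wss and look for the bend (knee) in the curve.
```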

*Average Silhouette method*

- Compute clustering algorithm (e.g., k-means clustering) for different values of k. For instance, by varying k from 1 to 10 clusters.
- For each k, calculate the average silhouette of observations (*avg.sil*).
- Plot the curve of *avg.sil* according to the number of clusters k.
- The location of the maximum is considered as the appropriate number of clusters.
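A sketch of the silhouette scan using scikit-learn's `silhouette_score` (the toy blobs are my own; note the silhouette is only defined for k ≥ 2):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, (40, 2)) for loc in (0, 5, 10)])

avg_sil = {}
for k in range(2, 11):  # silhouette needs at least 2 clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    avg_sil[k] = silhouette_score(X, labels)  # average silhouette over all points
best_k = max(avg_sil, key=avg_sil.get)  # k where the average silhouette peaks
```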

*Gap Statistic Method*

1. Cluster the observed data, varying the number of clusters from k = 1, …, *kmax*, and compute the corresponding total within-cluster variation *Wk*.
2. Generate B reference data sets with a random uniform distribution. Cluster each of these reference data sets with varying numbers of clusters k = 1, …, *kmax*, and compute the corresponding total within-cluster variation *Wkb*.
3. Compute the estimated gap statistic as the deviation of the observed *Wk* value from its expected value under the null hypothesis:

    Gap(k) = (1/B) Σb log(*Wkb*) − log(*Wk*)

    Also compute the standard deviation *sk* of the statistic.
4. Choose the number of clusters as the smallest value of k such that the gap statistic is within one standard deviation of the gap at k+1, i.e. Gap(k) ≥ Gap(k+1) − *sk+1*.
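A hedged sketch of this procedure (the `gap_statistic` helper name is mine; scikit-learn's `inertia_` serves as the within-cluster variation *Wk*):

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=6, B=10, seed=0):
    """Estimate the number of clusters via the gap statistic (Tibshirani et al.)."""
    rng = np.random.default_rng(seed)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    gaps, sk = [], []
    for k in range(1, k_max + 1):
        # log(W_k) for the observed data
        log_wk = np.log(KMeans(n_clusters=k, n_init=10, random_state=seed)
                        .fit(X).inertia_)
        # log(W_kb) for B reference sets drawn uniformly over X's bounding box
        log_wkb = [np.log(KMeans(n_clusters=k, n_init=10, random_state=seed)
                          .fit(rng.uniform(mins, maxs, X.shape)).inertia_)
                   for _ in range(B)]
        gaps.append(np.mean(log_wkb) - log_wk)           # Gap(k)
        sk.append(np.std(log_wkb) * np.sqrt(1 + 1 / B))  # s_k
    # smallest k with Gap(k) >= Gap(k+1) - s_{k+1}
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sk[k]:
            return k
    return k_max

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.5, (40, 2)) for loc in (0, 5, 10)])
best_k = gap_statistic(X)
```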

**Points to note:**

- If you want to visualize the clusters but have more than two independent variables, you can only plot two axes at a time. Try performing PCA (Principal Component Analysis) first and plotting the clusters with PC1 and PC2 as the axes: the first two principal components capture a large share of the variation present in your data.
- K-means is very sensitive to outliers. (PAM, which clusters around medoids and can select K by the optimal average silhouette width, is less sensitive to outliers.)
- Although it can easily be run on large datasets, K-means is sensitive to the initial random selection of cluster centers; for the same reason, results can vary with the ordering of the data. Try reordering (or re-seeding) and validating the results.
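A sketch of the PCA projection step for visualization, using scikit-learn (the 5-feature toy data is my own, chosen just to show the reduction to two axes):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Toy data with 5 features, so the clusters cannot be plotted directly
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, (40, 5)) for loc in (0, 5, 10)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)  # columns of X2 are PC1 and PC2
# Scatter-plot X2[:, 0] vs X2[:, 1] coloured by `labels`;
# pca.explained_variance_ratio_ shows how much variation PC1/PC2 capture.
```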

**Resources:**