Silhouette (clustering)

Silhouette is a method of interpretation and validation of consistency within clusters of data. The technique provides a succinct graphical representation of how well each object has been classified.[1] It was proposed by Belgian statistician Peter Rousseeuw in 1987.

The silhouette value is a measure of how similar an object is to its own cluster (cohesion) compared to other clusters (separation). The silhouette ranges from −1 to 1, where a high value indicates that the object is well matched to its own cluster and poorly matched to neighboring clusters. If most objects have a high value, then the clustering configuration is appropriate. If many points have a low or negative value, then the clustering configuration may have too many or too few clusters. A clustering with an average silhouette width of over 0.7 is considered to be "strong", a value over 0.5 "reasonable" and over 0.25 "weak", but with increasing dimensionality of the data, it becomes difficult to achieve such high values because of the curse of dimensionality, as the distances become more similar.[2] The silhouette score is specialized for measuring cluster quality when the clusters are convex-shaped, and may not perform well if the data clusters have irregular shapes or are of varying sizes.[3] The silhouette can be calculated with any distance metric, such as the Euclidean distance or the Manhattan distance.
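As a brief illustration, the silhouette is available in common libraries; the following minimal sketch uses scikit-learn's silhouette_score and silhouette_samples (the toy three-blob data, the k-means clustering, and the choice of k = 3 are arbitrary assumptions for the example):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score, silhouette_samples

    # Toy data: three well-separated blobs (arbitrary example data).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0.0, 4.0, 8.0)])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Mean silhouette over all points; any supported distance metric can be
    # used, e.g. "euclidean" or "manhattan".
    print(silhouette_score(X, labels, metric="euclidean"))

    # Per-point silhouette values s(i), as used for silhouette plots.
    print(silhouette_samples(X, labels)[:5])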

Definition

[Figure: Silhouette scores for three types of animals from the Zoo dataset, as rendered by the Orange data mining suite. At the bottom of the plot, the silhouette identifies dolphin and porpoise as outliers in the group of mammals.]

Assume the data have been clustered via any technique, such as k-medoids or k-means, into $k$ clusters.

For data point $i \in C_I$ (data point $i$ in the cluster $C_I$), let

$$a(i) = \frac{1}{|C_I| - 1} \sum_{j \in C_I,\, j \neq i} d(i, j)$$

be the mean distance between $i$ and all other data points in the same cluster, where $|C_I|$ is the number of points belonging to cluster $C_I$, and $d(i, j)$ is the distance between data points $i$ and $j$ in the cluster $C_I$ (we divide by $|C_I| - 1$ because we do not include the distance $d(i, i)$ in the sum). We can interpret $a(i)$ as a measure of how well $i$ is assigned to its cluster (the smaller the value, the better the assignment).

We then define the mean dissimilarity of point $i$ to some cluster $C_J$ as the mean of the distance from $i$ to all points in $C_J$ (where $C_J \neq C_I$).

For each data point $i \in C_I$, we now define

$$b(i) = \min_{J \neq I} \frac{1}{|C_J|} \sum_{j \in C_J} d(i, j)$$

to be the smallest (hence the $\min$ operator in the formula) mean distance of $i$ to all points in any other cluster (i.e., in any cluster of which $i$ is not a member). The cluster with this smallest mean dissimilarity is said to be the "neighboring cluster" of $i$ because it is the next best fit cluster for point $i$.

We now define a silhouette (value) of one data point $i$:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}, \quad \text{if } |C_I| > 1$$

and

$$s(i) = 0, \quad \text{if } |C_I| = 1.$$

This can also be written as:

$$s(i) = \begin{cases} 1 - a(i)/b(i), & \text{if } a(i) < b(i) \\ 0, & \text{if } a(i) = b(i) \\ b(i)/a(i) - 1, & \text{if } a(i) > b(i) \end{cases}$$

From the above definition it is clear that

$$-1 \le s(i) \le 1.$$

Note that $a(i)$ is not clearly defined for clusters of size 1, in which case we set $s(i) = 0$. This choice is arbitrary, but neutral in the sense that it is at the midpoint of the bounds, −1 and 1.[1]

For $s(i)$ to be close to 1 we require $a(i) \ll b(i)$. As $a(i)$ is a measure of how dissimilar $i$ is to its own cluster, a small value means it is well matched. Furthermore, a large $b(i)$ implies that $i$ is badly matched to its neighbouring cluster. Thus an $s(i)$ close to 1 means that the data point is appropriately clustered. If $s(i)$ is close to −1, then by the same logic we see that $i$ would be more appropriate if it were clustered in its neighbouring cluster. An $s(i)$ near zero means that the datum is on the border of two natural clusters.
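The definitions above translate directly into code. The following is a minimal, unoptimized sketch (the function name silhouette_values and the use of a precomputed distance matrix are our own choices), including the $s(i) = 0$ convention for singleton clusters:

    import numpy as np

    def silhouette_values(D, labels):
        """Per-point silhouette s(i) from a pairwise distance matrix D.

        Assumes at least two clusters are present.
        """
        n = len(labels)
        s = np.zeros(n)
        clusters = np.unique(labels)
        for i in range(n):
            own = labels == labels[i]
            if own.sum() == 1:
                continue  # singleton cluster: s(i) = 0 by convention
            # d(i, i) = 0, so summing over the whole cluster and dividing by
            # |C_I| - 1 yields the mean distance to the *other* members.
            a = D[i, own].sum() / (own.sum() - 1)
            # b(i): smallest mean distance from i to any other cluster.
            b = min(D[i, labels == c].mean() for c in clusters if c != labels[i])
            s[i] = (b - a) / max(a, b)
        return s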

The mean $s(i)$ over all points of a cluster is a measure of how tightly grouped all the points in the cluster are. Thus the mean $s(i)$ over all data of the entire dataset is a measure of how appropriately the data have been clustered. If there are too many or too few clusters, as may occur when a poor choice of $k$ is used in the clustering algorithm (e.g., k-means), some of the clusters will typically display much narrower silhouettes than the rest. Thus silhouette plots and means may be used to determine the natural number of clusters within a dataset. One can also increase the likelihood of the silhouette being maximized at the correct number of clusters by re-scaling the data using feature weights that are cluster specific.[4]

Kaufman et al. introduced the term silhouette coefficient for the maximum value of the mean $s(i)$ over all data of the entire dataset,[5] i.e.,

$$SC = \max_k\, \tilde{s}(k),$$

where $\tilde{s}(k)$ represents the mean $s(i)$ over all data of the entire dataset for a specific number of clusters $k$.
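Used this way, the silhouette coefficient gives a simple procedure for selecting the number of clusters; a minimal sketch (the candidate range for $k$ and the use of k-means are illustrative assumptions):

    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def silhouette_coefficient(X, k_range=range(2, 11)):
        """Return (best_k, SC): the k maximizing the mean silhouette."""
        best_k, sc = None, -1.0
        for k in k_range:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            s_mean = silhouette_score(X, labels)  # mean s(i) for this k
            if s_mean > sc:
                best_k, sc = k, s_mean
        return best_k, sc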

Simplified Silhouette and Medoid Silhouette


Computing the silhouette coefficient needs all $O(n^2)$ pairwise distances, making this evaluation much more costly than clustering with k-means. For a clustering with centers $\mu_{C_I}$ for each cluster $C_I$, we can use the following simplified silhouette for each point $i$ instead, which can be computed using only $O(nk)$ distances:

$$a'(i) = d(i, \mu_{C_I}) \quad \text{and} \quad b'(i) = \min_{C_J \neq C_I} d(i, \mu_{C_J}),$$

which has the additional benefit that $a'(i)$ is always defined (even for singleton clusters). The simplified silhouette and the simplified silhouette coefficient are then defined accordingly:[6]

$$s'(i) = \frac{b'(i) - a'(i)}{\max\{a'(i), b'(i)\}}$$

$$SC' = \max_k\, \frac{1}{n} \sum_{i=1}^{n} s'(i).$$

If the cluster centers are medoids (as in k-medoids clustering) instead of arithmetic means (as in k-means clustering), this is also called the medoid-based silhouette[7] or medoid silhouette.[8]

If every object is assigned to the nearest medoid (as in k-medoids clustering), we know that $a'(i) \le b'(i)$, and hence $s'(i) = 1 - a'(i)/b'(i)$.[8]
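A vectorized sketch of the simplified silhouette (the function name and the Euclidean point-to-center distances are illustrative choices; passing medoids as the centers yields the medoid silhouette):

    import numpy as np

    def simplified_silhouette(X, centers, labels):
        """Simplified silhouette s'(i) from the n*k point-to-center distances.

        Assumes at least two centers.
        """
        # Euclidean distance from every point to every cluster center.
        D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        n = len(labels)
        a = D[np.arange(n), labels]      # a'(i): distance to own cluster's center
        D_other = D.copy()
        D_other[np.arange(n), labels] = np.inf
        b = D_other.min(axis=1)          # b'(i): distance to the nearest other center
        with np.errstate(invalid="ignore"):
            s = (b - a) / np.maximum(a, b)
        return np.nan_to_num(s)          # define s'(i) = 0 where a' = b' = 0

If every point is assigned to its nearest center, $a'(i) \le b'(i)$ and all returned values are non-negative, consistent with the medoid silhouette property above.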

Silhouette Clustering


Instead of using the average silhouette to evaluate a clustering obtained from, e.g., k-medoids or k-means, we can try to directly find a solution that maximizes the silhouette. There is no closed-form solution for this maximization, but it will usually be best to assign points to the nearest cluster, as done by these methods. Van der Laan et al.[7] proposed to adapt the standard algorithm for k-medoids, PAM, for this purpose, and call the resulting algorithm PAMSIL:

  1. Choose initial medoids by using PAM
  2. Compute the average silhouette of this initial solution
  3. For each pair of a medoid m and a non-medoid x
    1. swap m and x
    2. compute the average silhouette of the resulting solution
    3. remember the best swap
    4. un-swap m and x for the next iteration
  4. If the best swap improves the average silhouette, perform it and return to step 3; otherwise stop.

The loop in step 3 is executed for $O(nk)$ pairs, and involves computing the silhouette in $O(n^2)$, hence this algorithm needs $O(i \cdot n^3 k)$ time, where $i$ is the number of iterations.
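A compact sketch of this swap loop (the silhouette is naively recomputed for every candidate swap, matching the cost analysis above; the random initialization is a stand-in for the PAM initialization of step 1):

    import numpy as np

    def mean_silhouette(D, labels):
        """Mean exact silhouette of a labeling, given the distance matrix D."""
        n, total = len(labels), 0.0
        for i in range(n):
            own = labels == labels[i]
            if own.sum() == 1:
                continue  # s(i) = 0 for singleton clusters
            a = D[i, own].sum() / (own.sum() - 1)
            b = min(D[i, labels == c].mean() for c in np.unique(labels) if c != labels[i])
            total += (b - a) / max(a, b)
        return total / n

    def pamsil(D, k, seed=0):
        """PAMSIL-style search: greedy medoid swaps maximizing the silhouette."""
        rng = np.random.default_rng(seed)
        n = D.shape[0]
        medoids = list(rng.choice(n, size=k, replace=False))  # stand-in for PAM init
        best = mean_silhouette(D, D[:, medoids].argmin(axis=1))
        while True:
            best_swap, best_val = None, best
            for mi in range(k):                     # each medoid m ...
                for x in range(n):                  # ... against each non-medoid x
                    if x in medoids:
                        continue
                    trial = medoids.copy()
                    trial[mi] = x                   # swap m and x
                    val = mean_silhouette(D, D[:, trial].argmin(axis=1))
                    if val > best_val:              # remember the best swap
                        best_swap, best_val = trial, val
            if best_swap is None:                   # no improving swap: stop
                return medoids, best
            medoids, best = best_swap, best_val     # perform the best swap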

Because this is a fairly expensive operation, the authors propose to also use the medoid-based silhouette, and call the resulting algorithm PAMMEDSIL.[7] It needs $O(i \cdot k^2 n^2)$ time.

Batool et al. propose a similar algorithm under the name OSil, and propose a CLARA-like sampling strategy for larger data sets, that solves the problem only for a sub-sample.[9]

By adopting recent improvements to the PAM algorithm, FastMSC reduces the runtime using the medoid silhouette to just $O(n^2)$.[8]

By starting with a maximum number of clusters $k_\text{max}$ and iteratively removing the worst center (in terms of a change in silhouette) and re-optimizing, the best (highest medoid silhouette) clustering can be automatically determined. As data structures can be reused, this reduces the computation cost substantially over repeatedly running the algorithm for different numbers of clusters.[10] This algorithm needs pairwise distances and is typically implemented with a pairwise distance matrix. The $O(n^2)$ memory requirement is the main limiting factor for applying this to very large data sets.
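A deliberately naive sketch of this decreasing-$k$ strategy (it recomputes the medoid silhouette from scratch each step and omits both the data-structure reuse and the swap-based re-optimization of the actual algorithm; names are our own):

    import numpy as np

    def medoid_silhouette_mean(D, medoids):
        """Mean medoid silhouette: needs only the two nearest medoids per point."""
        Dm = np.sort(D[:, medoids], axis=1)
        a, b = Dm[:, 0], Dm[:, 1]        # nearest and second-nearest medoid
        return np.where(b > 0, 1.0 - a / b, 0.0).mean()

    def decreasing_k_search(D, k_max, seed=0):
        """Start at k_max medoids; repeatedly drop the worst center; keep the best k."""
        rng = np.random.default_rng(seed)
        medoids = list(rng.choice(D.shape[0], size=k_max, replace=False))  # stand-in init
        best_medoids, best_val = medoids, medoid_silhouette_mean(D, medoids)
        while len(medoids) > 2:
            # Remove the center whose removal hurts the medoid silhouette least.
            candidates = [medoids[:j] + medoids[j + 1:] for j in range(len(medoids))]
            medoids = max(candidates, key=lambda m: medoid_silhouette_mean(D, m))
            # (The real algorithm would re-optimize here, e.g. with FastMSC swaps.)
            val = medoid_silhouette_mean(D, medoids)
            if val > best_val:
                best_medoids, best_val = medoids, val
        return best_medoids, best_val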


References

  1. ^ a b Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis". Journal of Computational and Applied Mathematics. 20: 53–65. doi:10.1016/0377-0427(87)90125-7.
  2. ^ Beyer, Kevin; Goldstein, Jonathan; Ramakrishnan, Raghu; Shaft, Uri (1999). "When Is 'Nearest Neighbor' Meaningful?". Database Theory – ICDT'99. Lecture Notes in Computer Science. Vol. 1540. Springer. pp. 217–235.
  3. ^ Monshizadeh, Mehrnoosh; Khatri, Vikramajeet; Kantola, Raimo; Yan, Zheng (2022-11-01). "A deep density based and self-determining clustering approach to label unknown traffic". Journal of Network and Computer Applications. 207: 103513. doi:10.1016/j.jnca.2022.103513. ISSN 1084-8045. However, both measures [silhouette coefficient and edge correlation] prefer convex-shaped clusters and cannot adapt to all cluster shapes produced by DBSCAN.
  4. ^ R.C. de Amorim, C. Hennig (2015). "Recovering the number of clusters in data sets with noise features using feature rescaling factors". Information Sciences. 324: 126–145. arXiv:1602.06989. doi:10.1016/j.ins.2015.06.039. S2CID 315803.
  5. ^ Leonard Kaufman; Peter J. Rousseeuw (1990). Finding groups in data : An introduction to cluster analysis. Hoboken, NJ: Wiley-Interscience. p. 87. doi:10.1002/9780470316801. ISBN 9780471878766.
  6. ^ Hruschka, E.R.; de Castro, L.N.; Campello, R.J.G.B. (2004). Evolutionary Algorithms for Clustering Gene-Expression Data. Fourth IEEE International Conference on Data Mining (ICDM'04). IEEE. pp. 403–406. doi:10.1109/ICDM.2004.10073.
  7. ^ a b c Van der Laan, Mark; Pollard, Katherine; Bryan, Jennifer (2003). "A new partitioning around medoids algorithm". Journal of Statistical Computation and Simulation. 73 (8): 575–584. doi:10.1080/0094965031000136012. ISSN 0094-9655. S2CID 17437463.
  8. ^ a b c Lenssen, Lars; Schubert, Erich (2022). Clustering by Direct Optimization of the Medoid Silhouette. International Conference on Similarity Search and Applications. pp. 190–204. arXiv:2209.12553. doi:10.1007/978-3-031-17849-8_15. Retrieved 2022-10-20.
  9. ^ Batool, Fatima; Hennig, Christian (2021). "Clustering with the Average Silhouette Width". Computational Statistics & Data Analysis. 158: 107190. arXiv:1910.11339. doi:10.1016/j.csda.2021.107190. S2CID 219260336.
  10. ^ Lenssen, Lars; Schubert, Erich (2024-02-01). "Medoid Silhouette clustering with automatic cluster number selection". Information Systems. 120: 102290. arXiv:2309.03751. doi:10.1016/j.is.2023.102290. ISSN 0306-4379.