Hierarchical Clustering Algorithm
A Hierarchical Clustering Algorithm is a clustering algorithm that can be implemented by a hierarchical clustering system (to solve a hierarchical clustering task).
- AKA: Hierarchic Clustering Algorithm.
- Context:
- It can range from being an Agglomerative Hierarchical Clustering Algorithm to being a Divisive Hierarchical Clustering Algorithm.
- It can be implemented by a Hierarchical Clustering System to solve a Hierarchical Clustering Task.
- Example(s):
- SLINK, a single-linkage agglomerative hierarchical clustering algorithm.
- CLINK, a complete-linkage agglomerative hierarchical clustering algorithm.
- Counter-Example(s):
- a K-Means Clustering Algorithm, which produces a flat (partitional) clustering rather than a hierarchy.
- See: Soft-Edges Clustering Algorithm, Greedy Algorithm, Dendrogram, Hierarchy.
References
2019
- (Johnson, 2019) ⇒ Reid Johnson. (2019). "Hierarchical Clustering." Retrieved: 2017-05-17.
- QUOTE: Hierarchical vs. Partitional Clustering
- A distinction among different types of clusterings is whether the set of clusters is nested or unnested.
- A partitional clustering is simply a division of the set of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset.
- A hierarchical clustering is a set of nested clusters that are organized as a tree.
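The nested-versus-unnested distinction can be made concrete with a small sketch (the data objects and tree shape below are invented for illustration): a hierarchy is a tree over the data objects, and cutting it at any level yields a flat, non-overlapping partition.

```python
# A hierarchical clustering represented as a nested tuple tree;
# the objects "a".."e" and the tree shape are invented for illustration.
tree = ((("a", "b"), "c"), ("d", "e"))

def leaves(node):
    """Collect the data objects under one node of the cluster tree."""
    if isinstance(node, str):
        return [node]
    return [x for child in node for x in leaves(child)]

# Cutting just below the root turns the hierarchy into a partitional
# clustering: every object falls into exactly one subset.
print([leaves(child) for child in tree])   # [['a', 'b', 'c'], ['d', 'e']]
```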
2016
- (Saket J & Pandya, 2016) ⇒ Swarndeep Saket J, and Sharnil Pandya. (2016). “An Overview of Partitioning Algorithms in Clustering Techniques.” In: International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), 5(6). ISSN: 2278-1323.
- QUOTE: The hierarchical clustering method seeks to build a tree-based hierarchical taxonomy from a set of unlabeled data. This grouping process is represented in the form of a dendrogram and can be analyzed with the help of statistical methods. There are two types of hierarchical clustering methods: 1) Agglomerative Hierarchical Clustering and 2) Divisive Clustering [7]. Hierarchical algorithms always result in what is called a ‘nested set of partitions’; they are called hierarchical because of the structure they represent about the dataset. In the agglomerative approach, also known as the ‘bottom up’ approach, clusters are successively merged. In the divisive approach, popularly known as the ‘top down’ approach, all data points are considered as a single cluster and split into a number of clusters based on certain criteria [8]. Examples of such algorithms are BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) and CURE (Clustering Using Representatives). The most important weakness of the hierarchical clustering technique is that it does not scale well because of its time complexity. In addition, it is difficult to alter the clusters once the process of analysis has started.
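To complement the quoted description of the divisive (‘top down’) strategy, here is a toy sketch in which all points start in one cluster and the widest cluster is repeatedly split around its two farthest members. The splitting rule is an invented simplification, not the criterion of BIRCH, CURE, or any other published algorithm.

```python
# Toy divisive (top-down) clustering: start with one cluster holding all
# points and split until k clusters remain. The split heuristic below
# (separate the two farthest members) is a simplified, illustrative rule.
def divisive(points, dist, k):
    clusters = [list(points)]                     # everything starts together
    while len(clusters) < k:
        # choose the cluster with the largest diameter to split next
        widest = max(clusters, key=lambda c: max(
            (dist(a, b) for a in c for b in c), default=0.0))
        clusters.remove(widest)
        # use the two farthest points as seeds for the two halves
        a, b = max(((p, q) for p in widest for q in widest),
                   key=lambda pq: dist(pq[0], pq[1]))
        clusters.append([p for p in widest if dist(p, a) <= dist(p, b)])
        clusters.append([p for p in widest if dist(p, a) > dist(p, b)])
    return clusters

print(divisive([0.0, 0.2, 5.0, 5.2, 9.0], lambda x, y: abs(x - y), 3))
# -> [[0.0, 0.2], [5.0, 5.2], [9.0]]
```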
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/hierarchical_clustering Retrieved: 2014-11-21.
- In data mining, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types[1]:
- Agglomerative: This is a "bottom up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
- Divisive: This is a "top down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.
- In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram.
- In the general case, the complexity of agglomerative clustering is [math]\displaystyle{ O(n^3) }[/math], which makes it too slow for large data sets. Divisive clustering with an exhaustive search is [math]\displaystyle{ O(2^n) }[/math], which is even worse. However, for some special cases, optimal efficient agglomerative methods (of complexity [math]\displaystyle{ O(n^2) }[/math]) are known: SLINK[2] for single-linkage and CLINK[3] for complete-linkage clustering. A naive implementation illustrating the cubic cost is sketched after the footnotes below.
- ↑ Rokach, Lior, and Oded Maimon. “Clustering methods." Data mining and knowledge discovery handbook. Springer US, 2005. 321-352.
- ↑ R. Sibson (1973). "SLINK: an optimally efficient algorithm for the single-link cluster method" (PDF). The Computer Journal. British Computer Society. 16 (1): 30–34. doi:10.1093/comjnl/16.1.30.
- ↑ D. Defays (1977). "An efficient algorithm for a complete-link method". The Computer Journal. British Computer Society. 20 (4): 364–366. doi:10.1093/comjnl/20.4.364.
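To make the quoted complexity concrete, here is a minimal sketch (with invented names and data) of naive agglomerative single-linkage clustering. Each of the O(n) merge steps rescans every cross-cluster point pair, which is what produces the cubic running time; SLINK achieves the same result in O(n^2) by avoiding these rescans.

```python
# Naive agglomerative (bottom-up) single-linkage clustering. Each merge
# step rescans all cross-cluster point pairs (O(n^2) work), and there are
# O(n) merges, giving the O(n^3) total cost discussed above.
from itertools import combinations

def single_linkage(points, dist, k):
    clusters = [[p] for p in points]          # every point starts alone
    while len(clusters) > k:
        # greedily pick the pair of clusters with the smallest
        # single-link distance (minimum over cross-cluster point pairs)
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda ij: min(dist(a, b)
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]))
        clusters[i].extend(clusters[j])       # merge the closest pair...
        del clusters[j]                       # ...and move up the hierarchy
    return clusters

print(single_linkage([0.0, 0.1, 0.2, 5.0, 5.1, 9.0],
                     lambda x, y: abs(x - y), 3))
# -> [[0.0, 0.1, 0.2], [5.0, 5.1], [9.0]]
```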
2008
- (Manning et al., 2008) ⇒ Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. (2008). "Hierarchical Clustering." In: "Introduction to Information Retrieval."
- QUOTE: Hierarchical clustering (or hierarchic clustering) outputs a hierarchy, a structure that is more informative than the unstructured set of clusters returned by flat clustering [1]. Hierarchical clustering does not require us to prespecify the number of clusters, and most hierarchical algorithms that have been used in IR are deterministic. These advantages of hierarchical clustering come at the cost of lower efficiency. The most common hierarchical clustering algorithms have a complexity that is at least quadratic in the number of documents, compared to the linear complexity of K-means and EM.
- ↑ In this chapter, we only consider hierarchies that are binary trees like the one shown in Figure 17.1 - but hierarchical clustering can be easily extended to other types of trees.
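As a brief illustration of the point that hierarchical clustering does not require the number of clusters up front, the sketch below uses SciPy's scipy.cluster.hierarchy routines on made-up 2-D data: the merge tree is computed once, and flat clusterings at several granularities are extracted from it afterwards.

```python
# Build one hierarchical clustering, then extract flat clusterings at
# several granularities; the 2-D sample data is invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (5, 2)),   # one loose group...
               rng.normal(4.0, 0.5, (5, 2))])  # ...and another

Z = linkage(X, method="complete")   # the full merge tree, computed once

# Unlike K-means, the number of clusters is chosen after the fact:
print(fcluster(Z, t=2, criterion="maxclust"))  # cut into 2 clusters
print(fcluster(Z, t=4, criterion="maxclust"))  # cut into 4 clusters
```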