Incremental Clustering: Introduction: Part I

Clustering is an important form of data analysis used in many domains, such as astronomy, zoology, and clinical research. With the dramatic increase in the amount of data gathered in recent years, the use of clustering has expanded even further, to applications such as personalization and targeted advertising. Clustering is now a core component of interactive systems that collect information on millions of users daily. It is becoming infeasible to store all relevant data in memory at once, which often necessitates a move to incremental approaches.

Before proceeding with this article, I recommend reading up on the fundamentals of machine learning and on incremental learning.

Incremental systems receive data elements one at a time and typically use far less space than is needed to store the entire data set. This presents a particularly interesting challenge for unsupervised learning, which, unlike its supervised counterpart, also suffers from the absence of a unique ground truth. Note that not all data has a meaningful clustering, and when an inherent structure does exist, it need not be unique. Different users may be interested in very different partitions. Consequently, different clustering approaches detect distinct types of structure, often yielding drastically different results on the same data. So far, differences in the input-output behaviour of clustering approaches have only been studied in the batch setting.
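To make the contrast concrete, here is a minimal sketch of an incremental clusterer: a sequential (online) variant of k-means that sees each point exactly once and keeps only k centroids and k counts in memory, never the data itself. This is an illustrative example, not an algorithm from the article; the function name and setup are my own.

```python
import math

def sequential_kmeans(stream, k):
    """Sequential (online) k-means: processes points one at a time,
    keeping only k centroids and k counts in memory."""
    centroids, counts = [], []
    for point in stream:
        if len(centroids) < k:
            # Use the first k points as the initial centroids.
            centroids.append(list(point))
            counts.append(1)
            continue
        # Find the nearest centroid to the incoming point.
        j = min(range(k), key=lambda i: math.dist(point, centroids[i]))
        counts[j] += 1
        # Move it toward the point by a 1/count step (a running mean).
        step = 1.0 / counts[j]
        centroids[j] = [c + step * (p - c) for c, p in zip(centroids[j], point)]
    return centroids
```

Note that the memory footprint is O(k), independent of the stream length, which is exactly what makes the incremental setting both attractive and restrictive.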


To match the kind of cluster structure present in data, a number of notions of clusterability have been proposed. These notions capture the structure of the target clustering: the clustering desired by the user for a given application. As such, notions of clusterability enable the analysis of clustering approaches by making it possible to formally determine whether an algorithm correctly recovers the intended partition. One elegant notion of clusterability requires that each point be closer to the data in its own cluster than to points in other clusters.
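This condition can be checked directly. The sketch below, with a function name of my own choosing, tests whether a labelled data set satisfies it: for every point, its largest within-cluster distance must be smaller than its smallest between-cluster distance.

```python
import math

def is_nice(points, labels):
    """Check the 'nice' clusterability condition: every point must be
    closer to each point in its own cluster than to any point outside it."""
    n = len(points)
    for i in range(n):
        # Largest distance from point i to a point in its own cluster.
        same = max((math.dist(points[i], points[j])
                    for j in range(n)
                    if j != i and labels[j] == labels[i]), default=0.0)
        # Smallest distance from point i to a point in another cluster.
        other = min((math.dist(points[i], points[j])
                     for j in range(n)
                     if labels[j] != labels[i]), default=math.inf)
        if same >= other:
            return False
    return True
```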

For convenience, we will refer to clusterings that satisfy this requirement as nice. Nice clusterings are readily detected offline by classical batch algorithms. In contrast, no incremental method can discover these partitions. Thus, batch algorithms are significantly stronger than incremental approaches in their ability to detect cluster structure.

It turns out that incremental approaches become considerably more powerful when the clustering problem is slightly modified: instead of asking for exactly the target partition, we are satisfied with a refinement, that is, a partition each of whose clusters is contained within some target cluster. Indeed, in many applications it is reasonable to allow additional clusters. Incremental approaches benefit from extra clusters in several ways.
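The refinement relation is simple to state in code. In this sketch (names are my own), clusterings are represented as lists of sets of point indices, and a candidate is a refinement of the target exactly when every candidate cluster sits inside some target cluster:

```python
def is_refinement(candidate, target):
    """Return True if every cluster in `candidate` is a subset of some
    cluster in `target`. Clusterings are lists of sets of point indices."""
    return all(any(cluster <= t for t in target) for cluster in candidate)
```

For example, splitting one target cluster into two smaller pieces still yields a refinement, whereas merging points from two different target clusters does not.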

Read more about unsupervised learning, the k-means algorithm, and its incremental/online version in the next part of this article.
