
In unsupervised ensemble learning one obtains the predictions of multiple experts or classifiers over a large set of unlabeled instances. As there is no labeled data, the reliability of the classifiers, which is a priori unknown, cannot be assessed directly. Common tasks are to estimate the accuracies of the different experts and to combine their possibly conflicting predictions into an accurate meta-learner.

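A minimal sketch of this setting on synthetic data (the number of classifiers, error rates, and label model below are illustrative assumptions, not taken from the papers): each classifier independently corrupts the unknown labels, and a simple unweighted majority vote serves as a baseline meta-learner.

```python
import numpy as np

# Illustrative synthetic setup: m classifiers, n unlabeled instances,
# binary labels in {-1, +1}; all numbers here are made up for the example.
rng = np.random.default_rng(0)
m, n = 5, 1000
true_labels = rng.choice([-1, 1], size=n)

# Each classifier flips the true label independently with its own error rate.
error_rates = np.array([0.1, 0.2, 0.3, 0.35, 0.4])
flips = rng.random((m, n)) < error_rates[:, None]
predictions = np.where(flips, -true_labels, true_labels)

# Baseline meta-learner: unweighted majority vote over the m classifiers.
majority = np.sign(predictions.sum(axis=0))
accuracy = np.mean(majority == true_labels)
```

The vote here weighs all classifiers equally; the point of the accuracy-estimation methods below is to do better by learning, without labels, which classifiers to trust.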
We have so far published three papers on unsupervised ensemble learning.
-
The first is based on a model referred to as conditional independence, in which we assume that the classifiers make independent errors:
Additional details and code
Paper: Estimating the accuracy of multiple classifiers without labeled data (AISTATS, 2015)
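To illustrate the kind of estimate the conditional-independence model admits (this sketch is not the paper's exact algorithm, and the synthetic data and iterative rank-one fit are assumptions of the example): for balanced binary labels in {-1, +1}, conditional independence implies that the off-diagonal entries of the classifiers' covariance matrix satisfy C_ij ≈ v_i v_j with v_i = 1 - 2·(error rate of classifier i), so fitting a rank-one structure to the empirical covariance recovers the classifiers' accuracies without any labels.

```python
import numpy as np

# Synthetic data under the conditional-independence model (illustrative
# assumptions): classifier i flips the true +-1 label independently with
# probability error_rates[i]; classes are balanced.
rng = np.random.default_rng(1)
m, n = 5, 5000
true_labels = rng.choice([-1, 1], size=n)
error_rates = np.array([0.1, 0.2, 0.3, 0.35, 0.4])
flips = rng.random((m, n)) < error_rates[:, None]
predictions = np.where(flips, -true_labels, true_labels)

# With balanced classes, Cov(f_i, f_j) = v_i * v_j for i != j, where
# v_i = 1 - 2 * error_rates[i].  Fit this rank-one structure to the
# empirical covariance, imputing the (uninformative) diagonal iteratively.
C = np.cov(predictions)
w, V = np.linalg.eigh(C)
v = np.sqrt(w[-1]) * V[:, -1]
for _ in range(20):
    C_fit = C.copy()
    np.fill_diagonal(C_fit, v * v)   # diagonal from the current rank-one model
    w, V = np.linalg.eigh(C_fit)
    v = np.sqrt(w[-1]) * V[:, -1]
v *= np.sign(v.sum())                # resolve the global sign ambiguity

estimated_error = (1 - v) / 2        # invert v_i = 1 - 2 * error_i
ranking = np.argsort(-v)             # most accurate classifier first
```

Once the accuracies are estimated, the individual predictions can be combined with accuracy-dependent weights instead of a plain majority vote.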
-
In the second publication we develop a method to detect, without any labeled data, classifiers that are strongly dependent, that is, classifiers that tend to make similar mistakes.
Paper: Unsupervised ensemble learning with dependent classifiers (AISTATS, 2016)
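One illustrative way to flag such dependent pairs (a sketch under the same balanced-label assumptions as above, not necessarily the paper's procedure): if all classifiers were conditionally independent, then C_ij = v_i v_j, so for any disjoint pair (k, l) the identity C_ik · C_jl / C_kl = v_i v_j holds. Comparing the observed C_ij with a robust median of these predictions exposes pairs whose covariance is larger than independence can explain.

```python
import numpy as np

# Synthetic data (illustrative assumptions): classifier 4 is an exact copy
# of classifier 3's error pattern, so the pair makes identical mistakes;
# the remaining classifiers err independently.
rng = np.random.default_rng(2)
m, n = 5, 5000
true_labels = rng.choice([-1, 1], size=n)
error_rates = np.array([0.1, 0.2, 0.3, 0.3, 0.3])
flips = rng.random((m, n)) < error_rates[:, None]
flips[4] = flips[3]                  # duplicated errors -> dependent pair
predictions = np.where(flips, -true_labels, true_labels)

C = np.cov(predictions)

def excess_cov(C, i, j):
    # Median, over disjoint pairs (k, l), of the conditional-independence
    # prediction C_ik * C_jl / C_kl, subtracted from the observed C_ij.
    m = C.shape[0]
    preds = [C[i, k] * C[j, l] / C[k, l]
             for k in range(m) for l in range(m)
             if len({i, j, k, l}) == 4 and abs(C[k, l]) > 1e-3]
    return C[i, j] - np.median(preds)

score = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        score[i, j] = excess_cov(C, i, j)

# The pair with the largest excess covariance is flagged as dependent.
i, j = np.unravel_index(np.argmax(score), score.shape)
```

The median over (k, l) makes the prediction robust: even when the dependent pair contaminates some of the ratios, the majority of them still reflect the independent structure.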
-
In our third publication we make use of deep learning tools for unsupervised ensemble learning.
Paper: A deep learning approach to unsupervised ensemble learning (ICML, 2016)

