Abstract:
Learning classifiers to be used as filters within the analytical reasoning
process creates new challenges and aggravates existing ones. Such classifiers are
typically trained ad-hoc, with tight time constraints that affect the amount
and the quality of annotation data and, thus, also the users' trust in the
classifier trained. We approach the challenges of ad-hoc training with
inter-active learning, which extends active learning by integrating human
experts' background knowledge to a greater extent. In contrast to active
learning, inter-active learning not only incorporates the users' expertise by
posing queries of data instances for labeling, but also supports the users
in comprehending the classifier model through visualization. Besides the
annotation of manually or automatically selected data instances, users are
empowered to directly adjust complex classifier models. To this end, our model
visualization facilitates the detection and correction of inconsistencies
between the classifier model trained by examples and the users' mental model
of the class definition. Visual feedback during the training process helps the
users assess the performance of the classifier and, thus, build trust in
the filter created. We demonstrate the capabilities of inter-active learning
in the domain of video visual analytics and compare its performance with the
results of random sampling and uncertainty sampling of training sets.