Kernel Conditional Graphical Model
We address the general classification problem in which the labels form an $L$-dimensional vector with $q$ possible values in each entry. Discriminative algorithms for this problem encode the dependencies among the labels in a graph to reduce the complexity of the label space. We present a unifying framework that allows these algorithms to be compared. Much of the related literature is difficult to follow because its notation is neither simple nor consistent, and can be misleading; this unifying framework is therefore a main contribution of the paper. We also propose a new algorithm for this problem that can be trained independently per clique. Given that the cliques are jointly responsible for the complete decision, each clique can be trained using all the discriminative information in the training examples. Because training is carried out independently per clique, the algorithm can be applied to any graphical model and can handle large training datasets.
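To make the complexity reduction concrete, consider the size of the label space. Without any structure, the joint label space has $q^L$ configurations, whereas a graphical model factorizes the decision over a set of cliques $\mathcal{C}$, so the cost grows only with the clique sizes (the chain example below is illustrative, not a construction from the paper):
$$
|\mathcal{Y}| \;=\; q^{L}
\qquad\text{vs.}\qquad
\sum_{c \,\in\, \mathcal{C}} q^{\lvert c \rvert}.
$$
For instance, a chain over the $L$ label entries has $L-1$ pairwise cliques, giving a cost proportional to $(L-1)\,q^{2}$ instead of the exponential $q^{L}$.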