## Abstract

We study an extension of the ``standard'' learning models to settings where observing the value of an attribute has an associated cost (which may differ between attributes). Our model assumes that the correct classification is given by some target function $f$ from a class of functions $\cal F$; most of our results concern the ability to learn a clause (an OR function of a subset of the variables) in various settings:

\myparagraph{Offline:} We are given both the function $f$ and the distribution $D$ that is used to generate an input $x$. The goal is to design a strategy that decides which attribute of $x$ to observe next so as to minimize the expected cost of evaluating $f(x)$. (In this setting there is no ``learning'' to be done, only an optimization problem to be solved; this problem turns out to be NP-hard, and hence approximation algorithms are presented.)

\myparagraph{Distributional online:} We study two types of ``learning'' problems: one where the target function $f$ is known to the learner but the distribution $D$ is unknown (and the goal is to minimize the expected cost, including the cost that stems from ``learning'' $D$), and one where $f$ is unknown (except that $f\in{\cal F}$) but $D$ is known (and the goal is to minimize the expected cost while limiting the prediction error involved in ``learning'' $f$).

\myparagraph{Adversarial online:} We are given $f$, but the inputs are selected adversarially. The goal is to compare the learner's cost to that of the best fixed evaluation order (i.e., we analyze the learner's performance via competitive analysis).
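As a concrete illustration of the offline setting (not taken from the paper): when the clause's attributes are independent under $D$, a classical rule evaluates them in increasing order of the ratio cost/probability, where the probability is $\Pr[x_i = 1]$. The Python sketch below, using hypothetical costs and probabilities, computes the exact expected short-circuit evaluation cost of an OR clause and checks the greedy order against brute force.

```python
from itertools import permutations

def expected_clause_cost(order, costs, probs):
    """Exact expected cost of evaluating an OR clause of independent
    Boolean attributes in the given order: we pay for each attribute
    observed until one evaluates to 1 (short-circuit) or all are read."""
    exp_cost = 0.0
    reach_prob = 1.0  # probability that evaluation reaches this attribute
    for i in order:
        exp_cost += reach_prob * costs[i]
        reach_prob *= 1.0 - probs[i]
    return exp_cost

# Hypothetical observation costs and marginals Pr[x_i = 1] under a
# product distribution D (illustrative values only).
costs = [4.0, 1.0, 3.0]
probs = [0.9, 0.2, 0.5]

# Greedy rule for independent attributes: increasing c_i / p_i.
greedy = sorted(range(len(costs)), key=lambda i: costs[i] / probs[i])

# Verify optimality by brute force over all evaluation orders.
best = min(permutations(range(len(costs))),
           key=lambda o: expected_clause_cost(o, costs, probs))
print(greedy, expected_clause_cost(greedy, costs, probs))
```

Note that this independence assumption is exactly what makes the example easy; for a general (correlated) distribution $D$ the offline problem is the NP-hard one discussed in the abstract.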