## Abstract

To better fit a variety of pattern recognition problems over strings, it is often appropriate to use a normalised version of the edit (or Levenshtein) distance. The goal of the normalisation is to take the lengths of the strings into account. A challenging question, however, is how to define a normalisation that simultaneously preserves the desired mathematical properties (in particular the triangle inequality), remains meaningful, and can be computed economically. We define a new, contextual normalisation in which each edit operation is divided by the length of the string on which it takes place (more precisely, by the length of the longer of the two strings involved). We prove that this contextual edit distance is a metric and that it can be computed through an extension of the usual dynamic programming algorithm for the edit distance. Over several experiments we show that the distance can be computed quickly, obtains good results in classification tasks, and has a low intrinsic dimension compared with other normalised edit distances.
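To illustrate the idea, here is a minimal sketch of a contextually weighted edit distance. It modifies the classic Wagner–Fischer dynamic program so that the operation applied at cell `(i, j)` has unit cost divided by `max(i, j)`, i.e. the length of the longer of the two prefixes involved. This is a simplified reading of the definition above for illustration only; it is not the paper's exact algorithm, and the function name is hypothetical.

```python
def contextual_edit_distance(a: str, b: str) -> float:
    """Sketch: edit distance where each operation's unit cost is divided
    by the length of the longer of the two prefixes involved (a
    simplified interpretation, not the paper's exact construction)."""
    m, n = len(a), len(b)
    # D[i][j] = cheapest weighted cost of turning a[:i] into b[:j]
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + 1.0 / i            # delete a[i-1]
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + 1.0 / j            # insert b[j-1]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            w = 1.0 / max(i, j)                    # contextual weight
            sub = D[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else w)
            D[i][j] = min(sub,
                          D[i - 1][j] + w,         # deletion
                          D[i][j - 1] + w)         # insertion
    return D[m][n]
```

For example, `contextual_edit_distance("abc", "abd")` yields 1/3: a single substitution at position 3, weighted by the prefix length 3. Note that this sketch keeps the quadratic O(mn) cost of the standard dynamic program; the paper's extension likewise stays within dynamic programming.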