Regression for Large Datasets using an Ensemble of GPU-accelerated ELMs
Mark van Heeswijk, Yoan Miche, Erkki Oja and Amaury Lendasse
In: Large-Scale Machine Learning: Parallelism and Massive Datasets (NIPS 2009 Workshop), Friday December 11th, Whistler, Canada.
This paper presents an approach that allows regression to be performed on
large datasets in reasonable time. The main component of the approach is
speeding up the slowest operation of the algorithm by running it on the
Graphics Processing Unit (GPU) of the video card, instead of on the processor (CPU).
The experiments show a speedup of an order of magnitude when using the GPU,
and competitive performance on the regression task. Furthermore, the presented
approach lends itself to further parallelization, which remains to be investigated.
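To make the setting concrete, below is a minimal sketch of a single Extreme Learning Machine (ELM) for regression in the standard formulation: random hidden-layer weights followed by an analytic least-squares solve for the output weights. All names, shapes, and parameters here are illustrative assumptions, not taken from the paper; the least-squares step (the pseudoinverse of the hidden-layer matrix) is the kind of costly operation that the paper offloads to the GPU.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Train a basic ELM: random input weights, least-squares output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Solving this least-squares problem is the dominant cost for large
    # datasets; in the paper's approach, this step would run on the GPU.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit y = sin(x) on random samples
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
model = elm_fit(X, y, n_hidden=40)
pred = elm_predict(X, model)
```

An ensemble, as in the paper, would train several such ELMs (each with its own random hidden layer) and combine their predictions, e.g. by averaging.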