EPrints submitted by Olivier Teytaud
Number of EPrints submitted by this user: 37
Application of reinforcement learning to risk management
Multiple risk control in association rule selection
Evaluation and validation of association rules
Local and global order-3/2 convergence of a surrogate evolutionary algorithm
A statistical learning approach to bloat and universal consistency in genetic programming
Quasi-random mutations for evolution strategies
Convergence proofs and convergence rates for multi-modal multi-objective optimization
Statistical asymptotic and non-asymptotic consistency of Bayesian networks: convergence to the right structure and consistent probability estimates
Inductive-deductive systems: a learning theory point of view
Taylor-based pseudo-metrics for random process fitting in dynamic programming
Estimation and control of the generalization performance of neural networks
Optimal Estimation for Large-Eddy Simulation of Turbulence and application to the analysis of subgrid models
How entropy theorems can show that approximating high-dimensional Pareto fronts is too hard
Statistical inference and data mining: false discoveries control
Quasi-random resamplings, with applications to rule extraction, cross-validation and (su-)bagging
Continuous lunches are free
Conditioning, halting criteria and choosing lambda
Slightly beyond Turing's computability for studying genetic programming
DCMA, yet another derandomization in covariance matrix adaptation
Association rule interestingness: measure and statistical validation
Comparison-based algorithms are robust and randomized algorithms are anytime
On the ultimate convergence rates for isotropic algorithms and the best choices among various forms of isotropy
Active learning in regression, with application to stochastic dynamic programming
Anytime many-armed bandits
On the adaptation of noise level for stochastic optimization
Nonlinear programming in approximate dynamic programming: bang-bang solutions, stock-management and unsmooth penalties
Boosting active learning to optimality: a tractable Monte-Carlo based approach
Continuous lunches are free plus the design of optimal optimization algorithms
When does quasi-random work?
A Statistical Learning Perspective of Genetic Programming
Creating an Upper-Confidence-Tree program for Havannah
The parallelization of Monte-Carlo planning
On the parallel speed-up of Estimation of Multivariate Normal Algorithm and Evolution Strategies
Optimizing Low-Discrepancy Sequences with an Evolutionary Algorithm
The computational intelligence of MoGo revealed in Taiwan's computer-Go tournaments
Why one must use reweighting in estimation of distribution algorithms