HAL : in2p3-00580580, version 1

OPT 2009: 2nd NIPS Workshop on Optimization for Machine Learning, Whistler, Canada (2009)
Sampling-based optimization with mixtures
R. Bardenet (1), B. Kégl (1, 2, 3)
(12/12/2009)

Sampling-based Evolutionary Algorithms (EA) are of great use when dealing with a highly non-convex and/or noisy optimization task, which is the kind of task we often have to solve in Machine Learning. Two derivative-free examples of such methods are Estimation of Distribution Algorithms (EDA) and techniques based on the Cross-Entropy Method (CEM). One of the main problems these algorithms have to solve is finding a good surrogate model for the normalized target function, that is, a model which has sufficient complexity to fit this target function, but which keeps the computations simple enough. Gaussian mixture models have been applied in practice with great success, but most of these approaches lacked a solid theoretical foundation. In this paper we describe a sound mathematical justification for Gaussian mixture surrogate models; more precisely, we propose a proper derivation of an EDA/CEM algorithm with mixture updates using Expectation-Maximization techniques. It will appear that this algorithm resembles the recent Population MCMC schemes, thus reinforcing the link between Monte Carlo integration methods and sampling-based optimization. We will concentrate throughout this paper on continuous optimization.
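
To illustrate the class of algorithms the abstract refers to, here is a minimal sketch of a generic CEM/EDA-style loop in which a Gaussian mixture, fitted by EM on the best candidates, serves as the sampling model for the next population. This is not the derivation proposed in the paper; the toy objective `f`, the population and elite sizes, and the use of scikit-learn's `GaussianMixture` are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def f(x):
    # Toy multimodal objective to minimize (illustrative only).
    return np.sum(x**2, axis=1) + 2.0 * np.sin(3.0 * x).sum(axis=1)

rng = np.random.default_rng(0)
dim, pop_size, n_elite, n_components = 2, 200, 40, 3

# Initial population drawn from a broad Gaussian.
samples = rng.normal(0.0, 3.0, size=(pop_size, dim))

for it in range(30):
    scores = f(samples)
    # CEM/EDA-style selection: keep the best fraction of the population.
    elites = samples[np.argsort(scores)[:n_elite]]
    # Fit the Gaussian mixture surrogate to the elites via EM.
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          reg_covar=1e-3).fit(elites)
    # Propose the next population by sampling the updated mixture.
    samples, _ = gmm.sample(pop_size)

best = samples[np.argmin(f(samples))]
print("approximate minimizer:", best)
```

Using a mixture rather than a single Gaussian lets the sampling model track several promising regions at once, which is the practical motivation for mixture surrogates in non-convex problems.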
1 :  LAL - Laboratoire de l'Accélérateur Linéaire
2 :  INRIA Saclay - Ile de France - TAO
3 :  LRI - Laboratoire de Recherche en Informatique
Computer Science/Performance and reliability

Computer Science/Algorithms and data structures
List of files attached to this document:
PDF: OPT2009-Bardenet.pdf (320.7 KB)