HAL : hal-00491560, version 1

Journal of Grid Computing 8, 3 (2010) 473-492
Multi-objective reinforcement learning for responsive grids
Julien Perez1, Cecile Germain-Renaud1, 2, Balázs Kégl1, 2, 3, Charles Loomis3
Grid Observatory Collaboration(s)

Grids organize resource sharing, a fundamental requirement of large scientific collaborations. Seamless integration of grids into everyday use requires responsiveness, which can be provided by elastic Clouds in the Infrastructure as a Service (IaaS) paradigm. This paper proposes a model-free resource provisioning strategy supporting both requirements. Provisioning is modeled as a continuous action-state space, multi-objective reinforcement learning (RL) problem under realistic hypotheses; simple utility functions capture the high-level goals of users, administrators, and shareholders. The model-free approach falls under the general program of autonomic computing, where the incremental learning of the value function associated with the RL model provides the so-called feedback loop. The RL model includes an approximation of the value function through an Echo State Network. Experimental validation on a real dataset from the EGEE grid shows that introducing a moderate level of elasticity is critical to ensure a high level of user satisfaction.
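The abstract describes approximating the RL value function with an Echo State Network (ESN): a fixed random reservoir with a trainable linear readout, updated incrementally (the autonomic "feedback loop"). The following is a minimal sketch of that idea; the state features, reservoir size, spectral-radius scaling, and the TD(0)-style readout update are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATE = 4   # assumed state features (e.g. queue length, load, backlog, elasticity)
N_RES = 50    # reservoir size (assumption)

# Fixed random input and reservoir weights; reservoir scaled so its
# spectral radius is below 1, the usual echo-state condition.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_STATE))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

W_out = np.zeros(N_RES)   # trainable linear readout: scalar value estimate
x = np.zeros(N_RES)       # reservoir (recurrent) state

def value(state, x):
    """One ESN step: update the reservoir, read out a scalar value."""
    x_new = np.tanh(W_in @ state + W @ x)
    return W_out @ x_new, x_new

# Incremental TD(0)-style update of the readout only; the reservoir
# stays fixed, which is what makes ESN training cheap.
alpha, gamma = 0.01, 0.95

def td_update(s, r, s_next, x):
    global W_out
    v, x1 = value(s, x)            # value of current state
    v_next, _ = value(s_next, x1)  # bootstrap from next state
    W_out = W_out + alpha * (r + gamma * v_next - v) * x1
    return x1

# Toy run on random transitions with a constant utility/reward of 1.0.
for _ in range(100):
    s, s_next = rng.normal(size=N_STATE), rng.normal(size=N_STATE)
    x = td_update(s, 1.0, s_next, x)
```

After a few updates the readout moves away from zero, so the value estimate becomes non-trivial; in the paper's setting the reward would instead be the multi-objective utility combining user, administrator, and shareholder goals.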
1 :  LRI - Laboratoire de Recherche en Informatique
2 :  INRIA Saclay - Ile de France - TAO
3 :  LAL - Laboratoire de l'Accélérateur Linéaire
Computer Science/Distributed, Parallel, and Cluster Computing
Computer Science/Modeling and Simulation
Files attached to this document:
RLGrid_JGC09_V7.pdf(1 MB)