HAL : inria-00287354, version 1

Utility-based Reinforcement Learning for Reactive Grids
Perez J., Germain-Renaud C., Kégl B., Loomis C.
The 5th IEEE International Conference on Autonomic Computing, Chicago, United States (2008) - http://hal.inria.fr/inria-00287354
Computer Science/Artificial Intelligence
Julien Perez (1), Cécile Germain-Renaud (1), Balázs Kégl (1, 2, 3), C. Loomis (2)
1 :  LRI - Laboratoire de Recherche en Informatique
http://www.lri.fr/
CNRS : UMR8623 – Université Paris Sud
LRI - Bâtiments 650-660 Université Paris-Sud 91405 Orsay Cedex
France
2 :  LAL - Laboratoire de l'Accélérateur Linéaire
http://www.lal.in2p3.fr/
CNRS : UMR8607 – IN2P3 – Université Paris XI - Paris Sud
Centre Scientifique d'Orsay B.P. 34 91898 ORSAY Cedex
France
3 :  INRIA Saclay - Ile de France - TAO
http://tao.lri.fr/tiki-index.php
INRIA – CNRS : UMR8623 – Université Paris XI - Paris Sud
DIGITEO Bat. Claude Shannon - Université de Paris-Sud, Bâtiment 660, 91190 Gif-sur-Yvette
France
Large-scale production grids are an important case for autonomic computing. They follow a mutualization paradigm: decision-making (human or automatic) is distributed and largely independent, and, at the same time, it must implement the high-level goals of the grid management. This paper deals with the scheduling problem with two partially conflicting goals: fair-share and Quality of Service (QoS). Fair sharing is a well-known issue motivated by return on investment for participating institutions. Differentiated QoS has emerged as an important and unexpected requirement in the current usage of production grids. In the framework of the EGEE grid (one of the largest existing grids), applications from diverse scientific communities require a pseudo-interactive response time. More generally, seamless integration of the grid power into everyday use calls for unplanned and interactive access to grid resources, which defines reactive grids. The major result of this paper is that the combination of utility functions and reinforcement learning (RL) provides a general and efficient method for dynamically allocating grid resources in order to satisfy both end users with differentiated requirements and participating institutions. Combining RL methods and utility functions for resource allocation was pioneered by Tesauro and Vengerov. While the application contexts are different, the resource allocation issues are very similar. The main difference in our work is that we consider a multi-criteria optimization problem that includes a fair-share objective. A first contribution of our work is the definition of a set of variables describing states and actions that allows us to formulate the grid scheduling problem as a continuous action-state space reinforcement learning problem. To capture the immediate goals of end users and the long-term objectives of administrators, we propose automatically derived utility functions.
Finally, our experimental results on a synthetic workload and a real EGEE trace show that RL clearly outperforms the classical schedulers, so it is a realistic alternative to empirical scheduler design.
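The abstract describes a reward built from utility functions that jointly capture per-job QoS and institutional fair-share. A minimal sketch of such a reward signal for an RL scheduler is given below; the sigmoid QoS shape, the mean-deviation fair-share penalty, and the weight `w_qos` are illustrative assumptions, not the paper's exact formulation:

```python
import math

def qos_utility(response_time, deadline, steepness=1.0):
    """Sigmoid utility: near 1 when the job finishes well before its
    deadline, dropping toward 0 as response time exceeds it.
    (Illustrative shape; the paper derives its own utilities.)"""
    return 1.0 / (1.0 + math.exp(steepness * (response_time - deadline)))

def fairshare_penalty(actual_shares, target_shares):
    """Mean absolute deviation between each institution's actual and
    target share of grid resources (both lists sum to 1)."""
    return sum(abs(a - t) for a, t in zip(actual_shares, target_shares)) \
        / len(target_shares)

def reward(response_time, deadline, actual_shares, target_shares, w_qos=0.5):
    """Scalar reward for the RL scheduler: a weighted mix of QoS utility
    and (one minus) the fair-share deviation."""
    return (w_qos * qos_utility(response_time, deadline)
            + (1.0 - w_qos)
            * (1.0 - fairshare_penalty(actual_shares, target_shares)))
```

An RL agent over a continuous state (queue lengths, pending deadlines, current share deviations) would then be trained to maximize the discounted sum of this reward, trading off interactive response times against long-term fair-share.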
English

Conference paper (with proceedings)
2008
International
The 5th IEEE International Conference on Autonomic Computing
Chicago
United States
05/2008

Files attached to this document:
PDF
RLICAC08.pdf (701.9 KB)