Sunday, 16 November 2014

Call for Papers CEC2015 Special Session "Combining Evolutionary Computation and Reinforcement Learning"

Special Session for IEEE CEC 2015.

Evolutionary Computation (EC) and Reinforcement Learning (RL) are two research fields in the area of search, optimization, and control. RL addresses sequential decision-making problems in initially unknown stochastic environments, involving stochastic policies and unknown temporal delays between actions and their observable effects. EC studies algorithms that optimize a fitness function by searching for the optimal set of parameter values. RL copes quite easily with stochastic environments, which is harder for traditional EC methods. The main strengths of EC techniques are their general applicability to many different kinds of optimization problems and their global search behavior, which makes them less likely to get trapped in local optima. There also exist EC methods that deal with adaptive control problems, such as learning classifier systems and evolutionary reinforcement learning. These methods address essentially the same problem as RL, i.e. maximizing an agent's reward in a potentially unknown environment that is not always completely observable. Still, the approaches taken by these methods are different and complementary. RL learns the parameters of a single model using a fixed representation of the knowledge, improving its value function from the reward received after every step taken in the environment. EC is usually a population-based optimizer that uses a fitness function to rank individuals by their total performance in the environment and applies different operators to guide the search. These two research fields can benefit from an exchange of ideas, resulting in a better theoretical understanding and/or improved empirical efficiency.
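The contrast described above can be sketched in code. The following is a minimal, illustrative example (not from the CFP itself): tabular Q-learning updates a value function after every step in a toy two-state MDP, while a simple (mu + lambda) evolution strategy ranks a whole population by fitness and keeps the best individuals. All problem settings and parameter values here are assumptions chosen only for illustration.

```python
import random

# RL style: tabular Q-learning on a toy two-state MDP (illustrative only).
# States 0 and 1; action 1 moves to state 1, action 0 stays put;
# being in state 1 yields reward 1 per step.
def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(episodes):
        s = 0
        for _ in range(10):  # bounded episode length
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = 1 if a == 1 else s
            r = 1.0 if s2 == 1 else 0.0
            # incremental value update after every single step
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# EC style: a (mu + lambda) evolution strategy on a fitness function.
def evolve(fitness, mu=5, lam=10, gens=100, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(mu)]
    for _ in range(gens):
        offspring = [p + rng.gauss(0, sigma) for p in pop for _ in range(lam // mu)]
        # rank parents + offspring by total fitness, keep the best mu
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
    return pop[0]

Q = q_learning()
best = evolve(lambda x: -(x - 3.0) ** 2)
```

Note the structural difference: the RL loop assigns credit per step via the temporal-difference error, while the ES only ever sees each individual's total fitness.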

Aim and scope

The main goal of this special session is to solicit research on frontiers and potential synergies between evolutionary computation and reinforcement learning. We encourage submissions describing applications of EC for optimizing agents in difficult environments that are possibly dynamic, uncertain, and partially observable, as in games, multi-agent applications such as scheduling, and other real-world applications. Ideally, this special session will gather research papers with a background in either RL or EC that propose new challenges and ideas as a result of synergies between RL and EC.



Topics of interest

We enthusiastically solicit papers on relevant topics such as:
  • Novel frameworks including both evolutionary algorithms and RL
  • Comparisons between RL and EC approaches to optimize the behavior of agents in specific environments
  • Parameter optimization of EC methods using RL or vice versa
  • Adaptive search operator selection using reinforcement learning
  • Optimization algorithms such as meta-heuristics, evolutionary algorithms for dynamic and uncertain environments
  • Theoretical results on the learnability in dynamic and uncertain environments
  • On-line self-adapting systems or automatic configuration systems
  • Solving multi-objective sequential decision making problems with EC/RL
  • Learning in multi-agent systems using hybrids between EC and RL
  • Learning to play games using optimization techniques
  • Real-world applications in engineering, business, computer science, biological sciences, scientific computation, etc. in dynamic and uncertain environments solved with evolutionary algorithms
  • Solving dynamic scheduling and planning problems with EC and/or RL
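One of the topics above, adaptive search operator selection using reinforcement learning, is often framed as a multi-armed bandit problem: each search operator is an arm, and the credit it earns is the fitness improvement it produces. The sketch below is a hypothetical illustration under that framing, using epsilon-greedy selection between two mutation operators inside a simple hill climber; all names and parameter values are assumptions, not part of the CFP.

```python
import random

# Hypothetical sketch: epsilon-greedy bandit over mutation operators.
# Each operator's value is its running average fitness improvement.
def bandit_operator_selection(fitness, operators, iters=2000, eps=0.2, seed=1):
    rng = random.Random(seed)
    value = [0.0] * len(operators)   # running credit per operator
    count = [0] * len(operators)
    x = rng.uniform(-10, 10)
    fx = fitness(x)
    for _ in range(iters):
        # epsilon-greedy choice of which operator to apply next
        if rng.random() < eps:
            k = rng.randrange(len(operators))
        else:
            k = max(range(len(operators)), key=lambda i: value[i])
        y = operators[k](x, rng)
        fy = fitness(y)
        reward = max(0.0, fy - fx)   # credit = raw fitness improvement
        count[k] += 1
        value[k] += (reward - value[k]) / count[k]
        if fy > fx:                  # greedy acceptance (hill climbing)
            x, fx = y, fy
    return x, value

# Two operators: a coarse mutation for exploration, a fine one for refinement.
ops = [lambda x, r: x + r.gauss(0, 1.0),
       lambda x, r: x + r.gauss(0, 0.01)]
best, credits = bandit_operator_selection(lambda x: -(x - 3.0) ** 2, ops)
```

The bandit gradually learns which operator pays off at the current stage of the search, which is the essence of RL-driven operator selection in EC.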

Organizers

Madalina M. Drugan (mdrugan@vub.ac.be)  Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050, Brussels, Belgium

Bernard Manderick (Bernard.Manderick@vub.ac.be) Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050, Brussels, Belgium

Marco A. Wiering (m.a.wiering@rug.nl) Institute of Artificial Intelligence and Cognitive Engineering, University of Groningen, Nijenborgh 9, 9700AK Groningen, The Netherlands
