Wednesday, 8 August 2018

CFP: IEEE CIM Special Issue on Deep Reinforcement Learning and Games (Oct 1)

AIMS AND SCOPE

Recently, there has been tremendous progress in artificial intelligence (AI) and computational intelligence (CI) for games. In 2015, Google DeepMind published the Nature paper “Human-level control through deep reinforcement learning”, demonstrating the power of AI&CI in learning to play Atari video games directly from screen capture. In 2016, it followed with the Nature cover paper “Mastering the game of Go with deep neural networks and tree search”, introducing the computer Go program AlphaGo. In March 2016, AlphaGo beat the world’s top Go player Lee Sedol 4:1. In early 2017, Master, a variant of AlphaGo, won 60 matches against top Go players. In late 2017, AlphaGo Zero, which learned only from self-play, was able to beat the original AlphaGo without a single loss (Nature 2017). This marks a new milestone in AI&CI history, and at its core is the algorithm of deep reinforcement learning (DRL). The achievements of DRL in games extend further. In 2017, AIs beat expert players at Texas Hold’em poker (Science 2017). OpenAI developed an AI that outperformed a champion player in 1v1 Dota 2. Facebook released a huge StarCraft I database. Blizzard and DeepMind turned StarCraft II into an AI research environment with a more open interface. In these games, too, DRL plays an important role.
The theoretical analysis of DRL, e.g., its convergence, stability, and optimality, is still in its early days. Learning efficiency needs to be improved by proposing new algorithms or by combining DRL with other methods. DRL algorithms also still need to be demonstrated in more diverse practical settings. Specific topics of interest include, but are not limited to:
  • Survey on DRL and games;
  • New AI&CI algorithms in games;
  • Learning forward models from experience;
  • New algorithms of DL, RL and DRL;
  • Theoretical foundation of DL, RL and DRL;
  • DRL combined with search algorithms or other learning methods;
  • Challenges of AI&CI in games, such as limitations in strategy learning;
  • Applications based on DRL or AI&CI games in realistic and complicated systems.

IMPORTANT DATES

Submission Deadline: October 1st, 2018
Notification of Review Results: December 10th, 2018
Submission of Revised Manuscripts: January 31st, 2019
Submission of Final Manuscript: March 15th, 2019
Special Issue Publication: August 2019 Issue

GUEST EDITORS

D. Zhao, Institute of Automation, Chinese Academy of Sciences, China, Dongbin.zhao@ia.ac.cn
S. Lucas, Queen Mary University of London, UK, simon.lucas@qmul.ac.uk
J. Togelius, New York University, USA, julian.togelius@nyu.edu

SUBMISSION INSTRUCTIONS
  1. The IEEE CIM requires all prospective authors to submit their manuscripts in electronic format, as a PDF file. The maximum length for Papers is typically 20 double-spaced typed pages in 12-point font, including figures and references. Submitted manuscripts must be written in English in single-column format. Authors should specify up to 5 keywords on the first page of their manuscript. Additional information about submission guidelines and information for authors is provided at the IEEE CIM website. Submissions are made via https://easychair.org/conferences/?conf=ieeecimcitbb2018.
  2. Also send an email to guest editor D. Zhao (dongbin.zhao@ia.ac.cn) with the subject “IEEE CIM special issue submission” to notify the editors of your submission.
  3. Early submissions are welcome. We will start the review process as soon as we receive your contribution.
