Introduction
Machine learning, which aims to build intelligent systems by learning models or knowledge from data, has made great progress over the past 30 years. However, a huge gap in learning ability still exists between machines and humans.
For example, a five-year-old child can learn to identify objects and to understand speech and language from a small number of instances or from daily communication, whereas machines can hardly match this ability even by learning from big data. In recent years, some researchers have attempted to develop machine learning methods that simulate human learning behavior. Such methods, called "Human-like Learning", share several features: learning from small amounts of supervised data, interactivity, all-time incremental (lifelong) learning, and the exploitation of contexts and of correlations between different data sources and tasks. Some existing learning paradigms, such as incremental learning, active learning, transfer learning, domain adaptation, learning with use, multi-task learning, and zero-shot/one-shot learning, can be viewed as special or simplified forms of human-like learning. The future trend is to make learning methods more flexible and active, requiring less supervision and exploiting all kinds of data more adequately.
Topics
The topics of interest include, but are not limited to:
- Brain-inspired neural networks
- Human-like learning for deep models
- Hybrid supervised and unsupervised learning
- Learning from interaction
- Learning with use
- Zero-shot/One-shot learning
- Advanced transfer learning and adaptation
- Advanced multi-task learning
- Learning from heterogeneous data
- Human-like learning for pattern recognition, computer vision, robotics, and other applications
Important Dates
Submission: January 15th, 2016
Special Session Chairs
Cheng-Lin Liu, Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences
Zhaoxiang Zhang, Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences