Aim
Deep learning is the study of non-linear, multi-layer neural network structures and their training algorithms. Deep learning emerged as a discipline with the breakthrough research published by G. Hinton in 2006 [1]. The purpose of deep (multi-layer) networks is to extract features from the training data and to learn hierarchical representations of the data [2]. Hierarchical representations are more robust and reusable than the classifier rules obtained through shallow learning, and deep neural networks are also more biologically plausible [3]. To date, deep learning has achieved impressive results in application areas such as speech recognition and image recognition, significantly outperforming shallow learning methods [4], [5].

Most existing deep learning techniques use the backpropagation algorithm, first introduced by P. Werbos in 1974 [6]. A well-known problem of this method when applied to deep networks is the vanishing of the gradients at the backpropagation step of the algorithm [7]. Many nature-inspired algorithms, such as particle swarm optimisation and evolutionary optimisation, are independent of gradient information. This quality gives nature-inspired algorithms a potential advantage over backpropagation in the deep learning paradigm.
Nature-inspired algorithms have been successfully applied to neural network training in the past [8], [9], [10]. However, applications of the nature-inspired methods to deep learning are still very limited. Training deep neural networks is a challenging task due to the inherent high dimensionality. The aim of this special session is to discuss the existing nature-inspired approaches to deep learning, to identify problems that arise, and to encourage research in this new and exciting field of computational intelligence.
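As a concrete illustration of gradient-free training, the sketch below trains a tiny feedforward network on the XOR problem with a global-best particle swarm optimiser. This is a minimal toy example, not a method proposed by the session: the network size, swarm parameters, and iteration budget are illustrative choices, and only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward network: 2 inputs -> 3 hidden (tanh) -> 1 linear output.
# All weights and biases are flattened into one vector so a gradient-free
# optimiser can treat training as a black-box minimisation problem.
N_IN, N_HID, N_OUT = 2, 3, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # 13 parameters

def forward(w, X):
    """Evaluate the network for a flat parameter vector w."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return np.tanh(X @ W1 + b1) @ W2 + b2

# XOR: a classic non-linearly-separable toy data set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Global-best PSO with commonly used inertia/acceleration coefficients.
SWARM, W_INERTIA, C1, C2 = 30, 0.729, 1.494, 1.494
pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(mse(gbest))  # training error after PSO; no gradients were computed
```

Note that the fitness evaluation touches only the network's forward pass; nothing in the loop depends on differentiability, which is precisely the property that makes such methods attractive when gradients vanish. The same flattening, however, also exposes the scalability challenge: a deep network turns this 13-dimensional search into one with millions of dimensions.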
Scope
The topics of the special session include, but are not limited to:
- Applications of evolutionary algorithms to deep learning
- Applications of swarm-based algorithms to deep learning
- Hybrid approaches to deep learning
- Theoretical and empirical analysis of the nature-inspired deep learning algorithms
- Identifying and understanding the limitations of nature-inspired deep learning
- Real-world applications of nature-inspired deep learning
- Training Restricted Boltzmann Machines with nature-inspired algorithms
- Training Deep Belief Networks with nature-inspired algorithms
- Training autoencoders and stacked autoencoders with nature-inspired algorithms
- Training convolutional neural networks with nature-inspired algorithms
- Weight pretraining with nature-inspired algorithms
- High-performance implementations of nature-inspired deep learning
- Analysis of overfitting and generalisation of deep networks trained using nature-inspired algorithms
- Training of deep networks in dynamic environments
Paper Submission
Papers should be formatted and submitted as specified on the CEC'2015 website. Special session papers will be treated in the same way as regular papers, and will be included in the conference proceedings.

CEC'2015 website: http://sites.ieee.org/cec2015/
Organizers
Prof Andries Engelbrecht
Department of Computer Science
University of Pretoria
Pretoria, South Africa
Email: engel@cs.up.ac.za
Prof Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He is a Professor in Computer Science at the University of Pretoria, and serves as Head of the department. He also holds the position of South African Research Chair in Artificial Intelligence, and leads the Computational Intelligence Research Group at the University of Pretoria, consisting of 40 Masters and PhD students. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these Computational Intelligence paradigms to data mining, games, bioinformatics, finance, and difficult optimisation problems. He has published over 220 papers in these fields in journals and international conference proceedings, and is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence. Prof Engelbrecht annually serves as a reviewer for over 20 journals and 10 conferences. He is an Associate Editor of the IEEE Transactions on Evolutionary Computation, the Journal of Swarm Intelligence, the IEEE Transactions on Computational Intelligence and AI in Games, and Applied Soft Computing. Additionally, he serves on the editorial board of three other international journals, and was co-guest editor of special issues of the IEEE Transactions on Evolutionary Computation and the Journal of Swarm Intelligence. He has served on the international program committees and organizing committees of a number of conferences, organized special sessions, presented tutorials, and taken part in panel discussions. He was the founding chair of the South African chapter of the IEEE Computational Intelligence Society. He is a member of the Evolutionary Computation Technical Committee, the Games Technical Committee, and the Evolutionary Computation in Dynamic and Uncertain Environments Task Force.
Ms Anna Rakitianskaia
Department of Computer Science
University of Pretoria
Pretoria, South Africa
Email: annar@cs.up.ac.za
Ms Anna Rakitianskaia received the Masters degree in Computer Science in 2012. She is a lecturer in Computer Science at the University of Pretoria. She is also pursuing a PhD degree in Computer Science, and her current research interests lie in neural networks, swarm intelligence, dynamic environments, time-series analysis, high-dimensional optimisation, and deep learning. She is especially interested in applying particle swarm optimisation to neural networks, as well as applying computational intelligence methods in real-life scenarios. Ms Rakitianskaia has published 6 papers in the field, and has been invited to review both conference and journal papers since 2008. She is a student member of the IEEE and the CIS.
References
[1] G. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[2] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
[3] M. Riesenhuber and T. Poggio, “Hierarchical models of object recognition in cortex,” Nature Neuroscience, vol. 2, no. 11, pp. 1019–1025, 1999.
[4] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber, “Deep, big, simple neural nets for handwritten digit recognition,” Neural Computation, vol. 22, no. 12, pp. 3207–3220, 2010.
[5] D. Ciresan, U. Meier, J. Masci, and J. Schmidhuber, “Multi-column deep neural network for traffic sign classification,” Neural Networks, vol. 32, pp. 333–338, 2012.
[6] P. J. Werbos, “Beyond regression: New tools for prediction and analysis in the behavioural sciences,” PhD thesis, Harvard University, Boston, USA, 1974.
[7] S. Hochreiter, “The vanishing gradient problem during learning recurrent neural nets and problem solutions,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 6, no. 02, pp. 107–116, 1998.
[8] F. Van Den Bergh and A. P. Engelbrecht, “Cooperative learning in neural networks using particle swarm optimizers,” South African Computer Journal, vol. 26, pp. 84–90, 2000.
[9] E. A. Grimaldi, F. Grimaccia, M. Mussetta, and R. E. Zich, “PSO as an effective learning algorithm for neural network applications,” in Proceedings of the 3rd International Conference on Computational Electromagnetics and Its Applications (ICCEA). IEEE, 2004, pp. 557–560.
[10] K. W. Chau, “Application of a PSO-based neural network in analysis of outcomes of construction claims,” Automation in Construction, vol. 16, no. 5, pp. 642–646, 2007.