Scope

Reliable regression and classification models in the field of evolutionary model induction revolve around the fundamental property of generalisation. This ensures that the induced model is a concise approximation of the data-generating process and performs correctly when presented with data that was not used during learning. In practical applications of evolutionary model induction, one has to deal with high-dimensional input spaces comprising many input variables. The problem with many input variables is that a large quantity of training data is needed to fit reliably the complex models required to capture complex data dependencies. Although the curse of dimensionality certainly raises important issues, it has not prevented EC researchers and practitioners from designing successful model induction methods. Real data are often confined to a region of the input space with lower effective dimensionality, and in particular the directions along which important variations in the target variables occur may be very confined. This special session addresses the fundamental problem of dimensionality reduction, and its effect on generalisation, in evolutionary model induction techniques.
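The idea of low effective dimensionality can be made concrete with a small, purely illustrative sketch (not tied to any specific method from the session): data embedded in a 10-dimensional input space but generated from only 2 latent factors, whose singular value spectrum reveals the effective dimension.

```python
import numpy as np

# Illustrative sketch: 500 samples living in a 10-D input space that
# actually lie on a 2-D linear subspace (plus a little noise).
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))   # 2 true degrees of freedom
mixing = rng.standard_normal((2, 10))    # linear embedding into 10 dimensions
X = latent @ mixing + 0.01 * rng.standard_normal((500, 10))

# Singular values of the centred data expose the effective dimensionality.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)

# The first two components capture essentially all of the variance.
print(f"variance explained by 2 of 10 components: {explained[1]:.4f}")
```

Here the first two singular directions account for nearly all of the variance, so a model only needs to learn dependencies along those directions rather than across all 10 nominal inputs.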
Topics Covered

The major interest is in applications including, but not limited to:
1. Feature selection methods.
2. Feature construction methods (i.e. via linear/nonlinear transformations of original features).
3. Regularisation methods.
4. Data sampling methods.
5. Ensemble methods.
6. Theory of generalisation.
7. Improvements to EC's generalisation.
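As a minimal, hypothetical example of the first topic, a filter-style feature selection method can rank input variables by their absolute correlation with the target and keep the top k (wrapper and embedded methods, e.g. evolving feature subsets with a GA, are common EC alternatives):

```python
import numpy as np

def select_top_k_features(X, y, k):
    """Rank features by absolute Pearson correlation with y; keep the top k.

    A simple filter-style selector for illustration only; it ignores
    feature interactions, which evolutionary methods can capture.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

# Toy data: only features 0 and 3 actually drive the target.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)

selected = select_top_k_features(X, y, k=2)
print(sorted(selected.tolist()))
```

On this toy problem the selector recovers the two informative features, shrinking the input space before any model is induced.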
Co-chairs

Dr. Ahmed Kattan
School of Computer Engineering, Um Al Qura University, Saudi Arabia
Dr. Alexandros Agapitos
Complex and Adaptive Systems Laboratory, University College Dublin, Ireland
Home page: http://www.fmc-cluster.org/index.php?option=com_content&view=article&id=114%3Aalexandros-agapitos&catid=55&Itemid=72