2007 International Symposium on Neural Networks

June 3-7, 2007, Mandarin Garden Hotel, Nanjing, China.
http://www.acae.cuhk.edu.hk/~isnn2007 or http://liu.ece.uic.edu/ISNN07


The approved Special Sessions are listed as follows: 
1. Combination of Security Engineering Issues with Neural Networks

Organizer: Dr. Tai-hoon Kim (taihoonn@empal.com), Ewha Womans University

Scope & Topics: We welcome all papers describing new and original results in the application of security engineering issues to neural networks. Topics of interest will focus on:

- Application of security engineering to NN applications or systems
- Application of security engineering to NN systems development processes and operational environments
- Security testing and evaluation of NN systems
- Other applications of security engineering to NN systems

Selected papers from this session will be included in special issues of:

* Journal of Security Engineering
* International Journal of Security and Its Applications
* International Journal of Hybrid Information Technology
2. Particle Filtering (CONDENSATION) Algorithms and their Applications

Organizer: Prof. Yang Weon Lee (ywlee@honam.ac.kr)

This invited session aims to bring together researchers interested in particle filters from different fields, giving them an opportunity to exchange information and new ideas and to discuss new developments in their respective areas. It is envisaged that areas of interaction will be further explored and that common applications will be considered, with a focus on CONDENSATION algorithms and their applications.

Description: Many modern signal processing problems involve systems that are nonlinear and nonstationary. Data-driven models based on powerful function approximation methods such as neural networks have been applied with demonstrable success to these problems. Nonstationarity imposes a particular difficulty in these settings because regularization techniques such as cross validation can be inapplicable. In this session, we will discuss what particle filters are and present how they can be applied in various fields.
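To make the basic idea concrete, the following minimal Python sketch implements a bootstrap (CONDENSATION-style) particle filter for a simple one-dimensional nonlinear state-space model; the model, noise levels, particle count and variable names are illustrative assumptions and are not taken from the session description.

```python
import numpy as np

# Minimal bootstrap (CONDENSATION-style) particle filter for a hypothetical
# 1-D nonlinear model: x_t = 0.5*x_{t-1} + sin(x_{t-1}) + process noise,
# z_t = x_t + observation noise. Steps: predict, weight, estimate, resample.

def particle_filter(observations, n_particles=500, rng=None):
    rng = np.random.default_rng(rng)
    particles = rng.standard_normal(n_particles)        # initial particle cloud
    estimates = []
    for z in observations:
        # Predict: propagate every particle through the nonlinear dynamics.
        particles = (0.5 * particles + np.sin(particles)
                     + 0.3 * rng.standard_normal(n_particles))
        # Update: weight particles by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2) + 1e-12
        weights /= weights.sum()
        estimates.append(float(weights @ particles))     # posterior-mean estimate
        # Resample: draw a new cloud in proportion to the weights.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xs, zs, x = [], [], 0.0
    for _ in range(20):                                  # simulate a short trajectory
        x = 0.5 * x + np.sin(x) + 0.3 * rng.standard_normal()
        xs.append(x)
        zs.append(x + 0.5 * rng.standard_normal())
    est = particle_filter(zs, rng=2)
    print("mean absolute error:", float(np.mean(np.abs(np.array(est) - np.array(xs)))))
```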

The Special Session will focus on:

- Applications of particle filtering in communications, vision systems and tracking systems
- Filter optimization
- Particle filters for mobile robots
- Hybrid particle filters
- Sequential inference with particle filters

3. Associative Memories and Applications

Organizers: Dr. Cornelio Yáñez-Márquez (cyanez@cic.ipn.mx), Itzamá López-Yáñez (ilopezb05@sagitario.cic.ipn.mx), Center for Computing Research, México

Associative memories have been an active area of research in computer science for roughly half a century. Computer scientists are interested in developing mathematical models that behave as similarly as possible to associative memories and, based on these models, in creating, designing and operating systems that are able to learn and recall patterns (the two phases of an associative memory model) representing objects, living organisms, concepts or abstract ideas. To do so, these objects or ideas must be represented as patterns, usually column vectors of finite dimension with real, rational, integer or Boolean values. The ultimate goal of an associative memory is to correctly recall complete patterns from input patterns, which may be altered with additive, subtractive or mixed noise.

The first known mathematical model of an associative memory is Steinbuch's Lernmatrix, developed by Karl Steinbuch in 1961. In the following years many efforts were made: by the late 1960s the Correlograph appeared, and in the early 1970s Anderson and Kohonen, working independently, developed the Linear Associator. The next important contribution came from Hopfield, who developed an associative memory model based on the behavior of certain physical systems. Hopfield's model is simultaneously an associative memory and a neural network, and his work revived the interest of computer scientists in neural networks, an area that had been dormant for 13 years. In the late 1990s morphological associative memories, which are based on mathematical morphology, were developed by Ritter et al., surpassing the learning and pattern recall capabilities offered by previous models. In 2002, a more efficient model of associative memories arose at the Center for Computing Research: Alpha-Beta associative memories, inspired by morphological associative memories and based on two new operators, alpha and beta, hence the name of the model. To this day, the Alpha-Beta model remains the most efficient and robust of these models and has been applied to several noteworthy problems, such as industrial color equalization, image compression, and concept lattices.
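As a concrete illustration of the two phases mentioned above, the following minimal Python sketch implements a simple Hopfield-style autoassociative memory with Hebbian outer-product learning and one-step thresholded recall; the pattern dimensions, number of stored patterns, noise level and variable names are illustrative assumptions, not part of the session description.

```python
import numpy as np

# Minimal Hopfield-style autoassociative memory: Hebbian outer-product
# learning and one-step thresholded recall of bipolar (+1/-1) patterns.
# Pattern size, number of patterns and the noise level are hypothetical.

def learn(patterns):
    """Learning phase: accumulate the outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0)                          # no self-connections
    return W

def recall(W, x_noisy):
    """Recall phase: one synchronous update with a hard threshold."""
    return np.where(W @ x_noisy >= 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stored = rng.choice([-1, 1], size=(3, 64))      # three 64-component patterns
    W = learn(stored)

    x = stored[0].copy()                            # corrupt one stored pattern with
    flip = rng.choice(64, size=6, replace=False)    # "mixed" noise: flip 6 components
    x[flip] *= -1

    print("pattern recovered:", np.array_equal(recall(W, x), stored[0]))
```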

4. Analysis of Performance Parameters and Quality Control of Online Production

Organizers: Liu Xintian (xintian@sjtu.edu.cn), School of Mechanical Engineering; Huang Hu (huangh@sues.edu.cn), College of Automobile Engineering, Shanghai University of Engineering Science


    * Performance parameters of the products are chosen during the trial course of production.
    * A standard ANFIS simulation model for a speed sensor is established.
    * Hypothesis methods for the performance parameters.
 
5. Intelligent Semiconductor Design and Manufacturing

Organizers: Tae Seon Kim (tkim@catholic.ac.kr), Catholic University of Korea; Yean-Der Kuan (ydkuan@tsint.edu.tw), Science and Technology Institute of Northern Taiwan

Recently, the use of neural networks for modeling, optimization, and control of semiconductor manufacturing processes has become very popular and has yielded impressive results. However, neural-network-based intelligent semiconductor manufacturing technologies are not yet mature, since they are still at an early stage. For this reason, practical deployment of intelligent semiconductor manufacturing technologies has not yet been achieved. The objective of this special session is to share various state-of-the-art intelligent semiconductor manufacturing technologies with other researchers. It can also act as a catalyst for the practical implementation of the developed technologies.

 
Topics of interest:

    * Equipment/Process Modeling;
    * Device Design, Modeling & Analysis;
    * Process Optimization;
    * Process Control;
    * Process Diagnosis;
    * Chip Test & Measurement Techniques;
    * Thermal Management;
    * Yield Modeling & Reliability Analysis;
    * Production Planning and Job Scheduling

6. Recent Development and Futuristic Trends in Machine Learning

Organizer: Prof. Er Meng Joo (EMJER@ntu.edu.sg), Nanyang Technological University, Singapore

Over the last decade, many researchers have developed novel machine learning algorithms with the ultimate objective of realizing human-like intelligence. It is observed that the algorithms developed centre around four technologies, namely Neural Networks, Fuzzy Logic, Genetic Algorithms and Evolutionary Computation. The purpose of this special session is to assemble some of the latest research results in the area of Machine Learning. It is hoped that this special session will provide a forum for scientists, researchers, engineers and other technical professionals to have a vibrant discussion on recent developments and futuristic trends in Machine Learning.

7. Extreme Learning Machine

Organizers: Prof. Guang-Bin Huang (EGBHuang@ntu.edu.sg), Nanyang Technological University, Singapore; Prof. Meng-Hiot Lim (EMHLIM@ntu.edu.sg), Nanyang Technological University, Singapore

It is clear that the learning speed of neural networks is in general far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons behind this may be that: 1) slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by such learning algorithms. Although support vector machines can produce better generalization performance, they face two problems as well: 1) the intensive computation involved in training, which is at least quadratic in the number of training examples; and 2) the large network size generated for large, complex applications. In addition, tedious work such as manual parameter tuning has to be done by users of these two technologies. A newly emerging technology called the extreme learning machine (ELM), which in theory tends to provide good generalization performance at extremely fast learning speed, can overcome these problems. ELM can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for neural networks and support vector machines. More and more researchers have been conducting ELM-related research. This session will provide a good platform for researchers to share their ideas and new results in this emerging area.
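As a rough illustration of the one-shot training idea behind ELM, the following minimal Python sketch assigns random hidden-layer parameters and solves the output weights in a single step with a pseudo-inverse; the toy regression task, hidden-layer size and function names are illustrative assumptions rather than a reference implementation.

```python
import numpy as np

# Minimal single-hidden-layer ELM sketch: random hidden-layer weights and
# biases are fixed, and the output weights are computed analytically with a
# pseudo-inverse, so no iterative gradient-based tuning is needed.

def elm_train(X, T, n_hidden=50, rng=None):
    """Return (W, b, beta): random hidden parameters and analytic output weights."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                      # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    # Toy regression task (hypothetical): learn y = sin(x) on [-3, 3].
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    T = np.sin(X)
    W, b, beta = elm_train(X, T, n_hidden=30, rng=0)
    print("training MSE:", float(np.mean((elm_predict(X, W, b, beta) - T) ** 2)))
```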

8. Application of Neural Fuzzy Networks in Industrial Processes

Organizer: Prof. Xiangjie Liu (liuxj@ncepu.edu.cn), North China Electric Power University, China

Most industrial processes are complex systems characterized by nonlinearity, uncertainty and load disturbances. These features are widespread in chemical processes, power generation, mining and even aerospace. Using a neural network to learn the plant model from operational process data is one solution. Since neural networks are well known for their ability to approximate nonlinear functions with arbitrary accuracy and to learn from experimental data, they can be used to model and control nonlinear industrial processes. However, neural networks are black boxes, in that it is difficult to interpret the input-output relationship from the networks. In contrast, neuro-fuzzy networks (NFNs) are transparent, as they are derived from fuzzy logic. Therefore, expert knowledge in linguistic form can be incorporated into the network through the design of the fuzzy rules. This feature is extremely useful for incorporating the knowledge of experienced operators into the network.
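To illustrate how linguistic rules can map onto network parameters, the following minimal Python sketch evaluates a zero-order Takagi-Sugeno-style neuro-fuzzy model with Gaussian membership functions; the example rules, membership parameters and variable names are hypothetical and serve only as an illustration of the general structure.

```python
import numpy as np

# Minimal zero-order TSK neuro-fuzzy sketch: each rule has a Gaussian
# membership function over the input and a constant consequent; the output
# is the firing-strength-weighted average of the consequents. The "low" and
# "high" temperature rules below are hypothetical expert knowledge.

def gaussian(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Rule base: IF temperature IS low  THEN output = 0.9
#            IF temperature IS high THEN output = 0.1
centers = np.array([20.0, 80.0])      # centers of the "low" and "high" terms
sigmas = np.array([15.0, 15.0])       # widths of the membership functions
consequents = np.array([0.9, 0.1])    # constant rule consequents

def nfn_output(temperature):
    firing = gaussian(temperature, centers, sigmas)     # rule firing strengths
    return float(firing @ consequents / firing.sum())   # normalized weighted sum

if __name__ == "__main__":
    for t in (20.0, 50.0, 80.0):
        print(f"temperature={t:5.1f}  output={nfn_output(t):.3f}")
```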