ICNN 1996: Washington, DC, USA
- Proceedings of International Conference on Neural Networks (ICNN'96), Washington, DC, USA, June 3-6, 1996. IEEE 1996
Volume 1
- Josef Göppert, Wolfgang Rosenstiel: Varying cooperation in SOM for improved function approximation. 1-6
- Roberto Horowitz, Luis Mejías Alvarez: Self-organizing neural networks: convergence properties. 7-12
- Ravi Kothari, Kwabena Agyepong: On lateral connections in feed-forward neural networks. 13-18
- Shong-Tun Li, Ernst L. Leiss: Constructing stochastic networks via β-RBF networks. 19-24
- Lars Kai Hansen, Jan Larsen: Unsupervised learning and generalization. 25-30
- Daniel T. Davis, Jenq-Neng Hwang: Estimating the multivariate conditional density using relatively sparse training data pairs. 31-37
- Josef Göppert, Wolfgang Rosenstiel: Regularized SOM-training: a solution to the topology-approximation dilemma? 38-43
- J. Michael Rozmus: The density-tracking self-organizing map. 44-49
- Tim Draelos, Don R. Hush: A constructive neural network algorithm for function approximation. 50-55
- Timo Honkela, Samuel Kaski, Krista Lagus, Teuvo Kohonen: Exploration of full-text databases with self-organizing maps. 56-61
- Aapo Hyvärinen, Erkki Oja: A neuron that learns to separate one signal from a mixture of independent sources. 62-67
- Mario Costa, Davide Palmisano, Eros Pasero: Supervised estimation of random variables taking on values in finite, ordered sets. 68-73
- Konstantinos I. Diamantaras: Robust principal component extracting neural networks. 74-77
- Peter J. Edwards, Alan F. Murray: Modelling weight- and input-noise in MLP learning. 78-83
- Andrea De Pol, Georg Thimm, Emile Fiesler: Sparse initial topologies for high order perceptrons. 84-89
- Naonori Ueda, Ryohei Nakano: Generalization error of ensemble estimators. 90-95
- Sayandev Mukherjee, Terrence L. Fine: Ensemble pruning algorithms for accelerated training. 96-101
- Fang Wang, Qi-Jun Zhang: An adaptive and fully sparse training approach for multilayer perceptrons. 102-107
- Nageswara S. V. Rao: Nearest neighbor rules PAC-approximate feedforward networks. 108-113
- Takashi Onoda: Experimental analysis of generalization capability based on information criteria. 114-119
- Eduardo Bayro-Corrochano, Sven Buchholz, Gerald Sommer: Selforganizing Clifford neural network. 120-125
- Fabio Ancona, Stefano Rovetta, Rodolfo Zunino: A parallel approach to plastic neural gas. 126-130
- Yiu-ming Cheung, Helen Z. H. Lai, Lei Xu: Application of adaptive RPCL-CLP with trading system to foreign exchange investment. 131-136
- Lucia Sardo, Josef Kittler: Minimum complexity estimator for RBF networks architecture selection. 137-142
- Guoping Qiu, Alexander W. Booth: Frequency sensitive Hebbian learning. 143-148
- Nicolas Pican: An orthogonal delta weight estimator for MLP architectures. 149-154
- Lee A. Feldkamp, Gintaras V. Puskorius, P. C. Moore: Adaptation from fixed weight dynamic networks. 155-160
- Bruce L. Digney: Nested Q-learning of hierarchical control structures. 161-166
- Martin A. Riedmiller: Application of sequential reinforcement learning to control dynamic systems. 167-172
- Randall S. Barton, David M. Himmelblau: Identification and dynamic data rectification using state correcting recurrent neural networks. 173-177
- Philippe Thomas, Gérard Bloch: From batch to recursive outlier-robust identification of non-linear dynamic systems with neural networks. 178-183
- Riccardo Carotenuto, Luisa Franchina, Moreno Coli: Nonlinear system process prediction using neural networks. 184-189
- Yi Sun: On reconstruction error of Kohonen self-organizing mapping. 190-195
- Dusko Katic, Srdjan Stankovic: Fast learning algorithms for training of feedforward multilayer perceptrons based on extended Kalman filter. 196-201
- Amir Sarajedini, Paul M. Chau: Casasent network density estimation. 202-206
- Harri Lappalainen: Soft multiple winners for sparse feature extraction. 207-210
- Giuseppe Acciani, Ernesto Chiarantoni, M. Minenna, Francesco Vacca: Multivariate data projection techniques based on a network of enhanced neural elements. 211-216
- Cesare Alippi: Extending the FPE and the effective number of parameters to neural estimators. 217-222
- L. M. Patnaik, Hema Nair, Varghese Abraham, G. Raghavendra, Shishir Kumar Singh, Rajan Srinivasan, K. Ramchand: Performance evaluation of neural network algorithms for multisensor data fusion in an airborne track while scan radar. 223-228
- Tuan A. Duong, Allen R. Stubberud, Taher Daud, Anil Thakoor: Cascade error projection: a new learning algorithm. 229-234
- Adam Krzyzak, Heinrich Niemann: On MISE convergence rates of radial basis functions networks. 235-240
- Siegfried Bös, Eng Siong Chng: Using weight decay to optimize the generalization ability of a perceptron. 241-246
- Naihong Wei, Shiyuan Yang, Shibai Tong: A modified learning algorithm for improving the fault tolerance of BP networks. 247-252
- Igor Vajda: About perceptron realizations of Bayesian decisions. 253-257
- Jing Xiao, Zhanbo Chen, Jie Cheng: Structure study of feedforward neural networks for approximation of highly nonlinear real-valued functions. 258-263
- Basabi Chakraborty, Yasuji Sawada: Fractal connection structure: effect on generalization in supervised feed-forward networks. 264-269
- Christoph S. Herrmann, Frank Reine: Considering adequacy in neural network learning. 270-275
- Goutam Chakraborty, Shoichi Noguchi: Improving generalization of a well trained network. 276-281
- Chuan Wang, Hsiao-Chun Wu, José C. Príncipe: Crosscorrelation estimation using teacher forcing Hebbian learning and its application. 282-287
- Rakesh Chitradurga: A novel weight training methodology for a multi-layer feed-forward neural net. 288-293
- Kimmo Kiviluoto: Topology preservation in self-organizing maps. 294-299
- Robert D. Brandt, Feng Lin: Can supervised learning be achieved without explicit error back-propagation? 300-305
- Kazushi Ikeda, Lei Xu: The probability distribution of parameters learned with the EM algorithm. 306-310
- Youmin Zhang, X. Rong Li, Zhiwei Zhu, Hongcai Zhang: A new clustering and training method for radial basis function networks. 311-316
- Sorin Draghici: Some enhancements of the constraint based decomposition training architecture. 317-322
- Jacques Ludik, Ian Cloete: Bounds for hidden units of simple recurrent networks. 323-328
- Mance E. Harmon, Leemon C. Baird III: Residual advantage learning applied to a differential game. 329-334
- Akira Hirabayashi, Hidemitsu Ogawa: Admissibility of memorization learning with respect to projection learning in the presence of noise. 335-340
- Michael R. Berthold: A probabilistic extension for the DDA algorithm. 341-346
- Christoph Goller, Andreas Küchler: Learning task-dependent distributed representations by backpropagation through structure. 347-352
- Kotaro Hirasawa, Masanao Ohbayashi, Masaru Koga, Masaaki Harada: Forward propagation universal learning network. 353-358
- Masood A. Badri: Neural networks of combination of forecasts for data with long memory pattern. 359-364
- James A. Reggia, Eric L. Grundstrom, Rita Sloan Berndt: Learning activation rules for associative networks. 365-370
- Steve Lawrence, Ah Chung Tsoi, C. Lee Giles: Local minima and generalization. 371-376
- K. M. Ho, C. J. Wang: Experiments on estimating random mapping. 377-380
- Mirta B. Gordon: A convergence theorem for incremental learning with real-valued inputs. 381-386
- Michael S. Gelder: Fuzzy logic adapted nodal training parameter. 387-391
- Altaf H. Khan, Roland G. Wilson: Integer-weight approximation of continuous-weight multilayer feedforward nets. 392-397
- Shyh-Jier Huang, Ching-Lien Huang: Improvement of classification accuracy by using enhanced query-based learning neural networks. 398-402
- Qiangfu Zhao: On-line evolutionary learning of NN-MLP based on the attentional learning concept. 403-408
- Sin Chun Ng, S. H. Leung, Andrew Luk: A generalized backpropagation algorithm for faster convergence. 409-413
- Davide Anguita, Sandro Ridella, Stefano Rovetta, Rodolfo Zunino: Limiting the effects of weight errors in feedforward networks using interval arithmetic. 414-417
- Fabrice Rossi: Second differentials in arbitrary feedforward neural networks. 418-423
- P. P. Raghu, B. Yegnanarayana: Texture classification using a probabilistic neural network and constraint satisfaction model. 424-429
- Silvio S. Pacheco, Antonio G. Thomé: SHAKE-A multi-criterion optimization scheme for neural network training. 430-435
- Kazuyuki Hara, Kenji Nakayama: Selection of minimum training data for generalization and online training by multilayer neural networks. 436-441
- M. (Sasheei) Saseetharran: Experiments that reveal the limitations of the small initial weights and the importance of the modified neural model. 442-447
- Damon A. Miller, Jacek M. Zurada, John H. Lilly: Pruning via Dynamic Adaptation of the Forgetting Rate in Structural Learning. 448
- Gonçalo C. Marques, Luís B. Almeida: An Objective Function for Independence. 453
- Srdjan Milenkovic, Zoran Obradovic, Vanco B. Litovski: Annealing based dynamic learning in second-order neural networks. 458-463
- Eero Yli-Rantala, Tommi Ojala, Petri Vuorimaa: Vector quantization of residual images using self-organizing map. 464-467
- Meng-Hock Fun, Martin T. Hagan: Levenberg-Marquardt training for modular networks. 468-473
- Steve Lawrence, Ah Chung Tsoi, C. Lee Giles: Correctness, efficiency, extendability and maintainability in neural network simulation. 474-479
- Ralf Der, Gerd Balzuweit, Michael Herrmann: Constructing principal manifolds in sparse data sets by self-organizing maps with self-regulating neighborhood width. 480-483
- Michiel C. van Wezel, Joost N. Kok, Kaisa Sere: Determining the number of dimensions underlying customer-choices with a competitive neural network. 484-489
- Jun Ji, George V. Meghabghab: Algorithmic enhancements to a backpropagation interior point learning rule. 490-495
- Jea-Rong Tsai, Pau-Choo Chung, Chein-I Chang: A sigmoidal radial basis function neural network for function approximation. 496-501
- Bruno Crespi, E. Omerti: Associative memories based on networks of delay differential equations. 502-506
- H. S. Ng, K. P. Lam: Neural network compensation of optimization circuit for minimax path problems. 507-512
- József Bíró, Zoltán Koronkai, Tibor Trón: A noise annealing neural network for global optimization. 513-518
- Fabio Abbattista, Giovanni Di Gioia, Giuseppe Di Santo, Anna Maria Fanelli: An associative memory based on the immune networks. 519-523
- Dan Ventura, Tony R. Martinez: Robust optimization using training set evolution. 524-528
- Satoshi Matsuda: Distribution of asymptotically stable states in Hopfield network for TSP. 529-531
- Dilip Sarkar: A three-stage architecture for bidirectional associative memory. 531/4-531/9
- Homayoun Valafar, Okan K. Ersoy, Faramarz Valafar: Distributed global optimization (DGO). 531/10-536
- Dijin Gong, Mitsuo Gen, Genji Yamazaki, Weixuan Xu: A modified ANN for convex programming with linear constraints. 537-542
- Alessandro Sperduti, Antonina Starita: A memory model based on LRAAM for associative access of structures. 543-548
- Xinhua Zhuang, Hongchi Shi, Yunxin Zhao: A general auto-associative memory model. 549-554
- Motonobu Hattori, Masafumi Hagiwara: Intersection learning for bidirectional associative memory. 555-560
- William J. Wolfe, Richard M. Ulmer: Orthogonal projections and the assignment problem. 561-564
- Hideki Asai, Takeshi Nakayama, Hiroshi Ninomiya: Tiling algorithm with fitting violation function for analog neural array. 565-570
- Thomas L. Hemminger, Carlos A. Pomalaza-Raez: Determining the minimum number of transmissions in multicast packet radio networks. 571-576
- Aleksa J. Zejak, Miroslav L. Dukic, Vladimir Smiljakovic, Zoran Golubicic: Optimisation neural networks by mismatched filter model. 577-582
- Mohamad H. Hassoun, Paul Benedict Watta: The Hamming associative memory and its relation to the exponential capacity DAM. 583-587
- Alan M. N. Fu, Hong Yan: A shape classifier based on Hopfield-Amari network. 588-593
- Shanguang Chen, Jinhe Wei, Yongjun Zhang, Yong Bao: A new model for bidirectional associative memories. 594-599
- Yannick Marchand, Jean-Luc Guérin, Jean-Paul A. Barthès: A Hopfield's model to find implicit links in hypertexts. 600-605
- Simeon J. Simoff: Handling uncertainty in neural networks: an interval approach. 606-610
- Lei Huang, Bai-Ling Zhang: A novel neural-network-related approach for regression analysis with interval model. 611-616
- Fredric M. Ham, Emmanuel G. Collins Jr.: A neurocomputing approach for solving the algebraic matrix Riccati equation. 617-622
- Youmin Zhang, X. Rong Li: A new fast U-D factorization-based learning algorithm with applications to nonlinear system modeling and identification. 623-628
- Chen-Khong Tham: A hierarchical CMAC architecture for context dependent function approximation. 629-634
Volume 2
- Murat Sönmez, Mingui Sun, Xiaopu Yan, Robert J. Sclabassi: Extension of a training set for artificial neural networks and its application to brain source localization. 635-640
- Ruey S. Huang, Chung J. Kuo, Ling-Ling Tsai, Oscal T. C. Chen: EEG pattern recognition-arousal states detection and classification. 641-646
- Stavros J. Perantonis, Dimitris A. Karras: Tracking endocardial border motion in ultrasonic images by using neural networks and ARIMA modelling techniques. 647-652
- Markus Schwarz, Bedrich J. Hosticka, R. Hauschild, W. Mokwa, Michael Scholles, H. K. Trieu: Hardware architecture of a neural net based retina implant for patients suffering from retinitis pigmentosa. 653-658
- Luke Theogarajan, Lex A. Akers: Silicon models of visual cortical processing. 659-664
- Sean D. Murphy, Edward W. Kairiss: Sensitivity of biological neuron models to fluctuations in synaptic input timing. 665-669
- Károly Lotz, Ladislau Bölöni, Tamás Roska, József Hámori: A cellular neural network model of the time-coding pathway of sound localization-hyperacuity in time. 670-675
- Richard H. Tsai, Bing J. Sheu, Theodore W. Berger: VLSI design for real-time signal processing based on biologically realistic neural models. 676-681
- Hui-Huang Hsu, LiMin Fu, José C. Príncipe: Context analysis by the gamma neural network. 682-687
- Radu Dogaru, A. T. Murgan, Stefan Ortmann, Manfred Glesner: Searching for robust chaos in discrete time neural networks using weight space exploration. 688-693
- Samuel W. K. Chan, James Franklin: A brain-state-in-a-box network for narrative comprehension and recall. 694-699
- R. J. Craddock, K. Warwick: Multi-layer radial basis function networks. An extension to the radial basis function. 700-705