Neural Networks
My Experience with Neural Networks
The following publications indicate what I have been doing in
neural networks:
- 33. Howitt, I., V. Vemuri, J. H. Reed, and T. C. Hsio, "Novel
RBF single-user detector for multi-user channels," IEEE
Transactions on Vehicular Communications, (submitted)
- 32. Howitt, I., V. Vemuri, J. H. Reed, and T. C. Hsio, "Radial
basis function study with applications to digital communications,"
IEEE Transactions on Neural Networks, (submitted)
- 31. Padgett, M. L., Johnson, J. L. and V. Vemuri, "Real-world
applications of artificial neural networks to cardiac monitoring
using radar and recent theoretical developments," SPIE, Orlando,
April 1997.
- 30. Vemuri, V. and R. Rogers, (Eds.), Artificial Neural
Networks: Forecasting time series, CS Press of IEEE, 1993.
- 29. Vemuri, V., Advances in Artificial Neural Networks:
Concepts, Applications and Implementation Considerations, Lecture
on Videotape, Published by Computer Society Press of IEEE, 1992.
- 28. Vemuri, V. (Ed.), Tutorial on Artificial Neural Networks:
Concepts and Control Applications, IEEE Computer Society Tutorial
Series. Computer Society Press of IEEE, 1992.
- 27. Vemuri, V. (Ed.) Artificial Neural Networks: Theoretical
Concepts. IEEE Computer Society Technology Series, 1988.
- 23. Vemuri, V. Use of Artificial Neural Networks in Control
Applications, in Advances in Computers, pp 203-250, Academic Press
(Ed. M. Yovits) Vol 36, 1993.
- 22. Styer, D. L. and V. Vemuri, "Artificial neural network
controller for multi-link structures," IEEE/IAS Intl. Conference
on Industrial Automation and Control, pp 37-42, Hyderabad, India,
January 5-7, 1995.
- 21. Styer, D. L. and V. Vemuri, "A comparison of adaptive
critic and chemotaxis in adaptive control," Mathematical and
Computer Modeling Journal, Vol. 21, No. 1/2, pp 109-118, 1995.
- 20. Howitt, I., V. Vemuri, J. H. Reed, and T. C. Hsio,
"Comparison of center estimation methods for RBF networks," Proc.
14th IMACS Congress, Vol. 3, pp 1304-1306, Atlanta, GA, July
11-15, 1994.
- 19. Howitt, I., J. H. Reed, V. Vemuri, and T. C. Hsio, "RBF
growing algorithm applied to the equalization and co-channel
interference rejection problem," Proc. IJCNN, Orlando, FL, pp
3571-3576, July 1994.
- 18. Howitt, I., J. H. Reed, V. Vemuri, and T. C. Hsio, Recent
developments in applying neural nets to equalization and
interference rejection, in Wireless Personal Communications:
Trends and Challenges, Edited by T. S. Rappaport, B. D. Woerner,
and J. H. Reed, pp 49-58, 1994.
- 17. Howitt, I., J. H. Reed, V. Vemuri, and T. C. Hsio, RBF
growing algorithm applied to equalization and co-channel
interference rejection problem, ICNN, Orlando, FL, June 1994.
- 16. Howitt, I., J. H. Reed, V. Vemuri, and T. C. Hsio, Recent
developments in applying neural nets to equalization and
interference rejection, Third Virginia Tech. Symposium on
Wireless Personal Communications, pp 1-12, 1993.
- 15. Rogers, R. and V. Vemuri, Exploring phase space concepts
in the forecasting of time series with artificial neural nets,
Proc. SIMTEC/WNN'93, pp 333-338, San Francisco, November 1993.
- 14. Jang, G. S., F. U. Dowla, and V. Vemuri, "A comparison of
neural network performance for seismic phase identification," J.
Franklin Institute, Vol. 330, No. 3, pp 505-524, May 1993.
- 13. Vemuri, V. and G. S. Jang, "Inversion of Fredholm integral
equations of the first kind with fully connected neural networks,"
J. Franklin Institute, 329(2): 241-257, 1992.
- 12. Styer, D. L. and V. Vemuri, "Adaptive critic and
chemotaxis in adaptive control," in Intelligent Engineering
Systems Through Artificial Neural Networks, Vol. 2, pp 161-166,
Proc. ANNIE'92, St. Louis, MO., Nov. 15-18, 1992.
- 11. Jang, G. S., F. U. Dowla, and V. Vemuri, "Performance
comparison of some neural network paradigms for solving the
seismic phase identification problem," Proc. SIMTEC/WNN'92, pp
707-713, Nov. 4-6, 1992. (received a Certificate of Merit)
- 10. Jang, G. S., F. U. Dowla, and V. Vemuri, "Performance of
neural networks for seismic phase identification: A comparative
study," Proc. 2nd Pacific Rim Intl. Conference on Artificial
Intelligence, pp 253-258, Seoul, S. Korea, Sep. 16-18, 1992.
- 9. Anderson, R. W. and V. Vemuri, "Neural Networks Can be Used
to Generate Time-optimal Control Signals," Neural Networks, Vol.
3, No. 3, pp ??-??, 1992.
- 8. Jang, G. S., F. U. Dowla, and V. Vemuri, "Application of
neural networks for seismic phase identification," Proc. IJCNN 91,
Singapore, pp 899-904, Nov. 1991.
- 7. Styer, D. L. and V. Vemuri, "Preprocessing of adaptive
critic inputs for adaptive control," In Artificial Intelligence in
Real-time Control, Ed. M. G. Rodd and G. J. Suski, pp 27-31,
Pergamon Press, UK, 1992. (Proc. of the 3rd IFAC International
Workshop on Artificial Intelligence in Real-time Control, Napa,
CA, Sep. 23-25, 1991.)
- 6. Vemuri, V. and G. S. Jang, "Inversion of Fredholm Integral
Equations of the first kind with fully connected Neural Networks,"
Proc. SPIE Conference, Orlando, FL, 1-5 April 1991.
- 5. Vemuri, V. and G. S. Jang, "A Neural Network Method of
Solving Inverse Problems Arising from Fredholm Integral Equations
of the First Kind," Proc. Second Workshop on Neural Nets, WNN_AIND
91, pp 207-217, Auburn University, AL, Feb. 11-13, 1991.
- 4. Vemuri, V. and F. U. Dowla, On the Formulation of
Continuous Neural Network Models and Their Solution on a Systolic
Processor, in Proc. Symposium on Electronic Imaging: Science and
Technology, Santa Clara, CA, February 11-16, 1990.
- 3. Dupaguntla, N. R. and V. Vemuri. A Neural Network
Architecture for Texture Segmentation and Labelling. Proceedings
of the International Joint Conference on Neural Networks,
Washington, D.C., June 18-22, I:127-133, 1989.
- 2. Dowla, F. U., A. J. DeGroot, S. R. Parker and V. Vemuri.
Back Propagation Neural Networks: Systolic Implementation for
Seismic Signal Filtering. Neural Networks 1(30):138-153, 1989.
- 1. Talbot, E., F. U. Dowla, and V. Vemuri. Classification of
Seismic Events Using Neural Networks, Proceedings of the
Artificial Intelligence Conference, I:73-84, 1988.
The following publications indicate my related work in genetic
algorithms and other soft computing methods:
- 10. Vemuri, V. and L. C. Jain, (Eds.), Computational
Intelligence in Fault Diagnosis, IEEE Press, 1996 (in preparation)
- 9. Padgett, M. and V. Vemuri, "Applications of Evolutionary
Systems in Industrial Electronics," Ed. (J. D. Irvin), CRC
Handbook on Industrial Electronics, Chapter 99, CRC Press, 1996
(in press)
- 8. Cooper, M. G., and V. Vemuri, "Genetic Algorithms," CRC
Handbook on Industrial Electronics, Ed. (J. D. Irvin), Chapter
101, CRC Press, 1996 (in press)
- 7. Vemuri, V. and W. Cedeno, "Industrial applications of
genetic algorithms," J. of Network and Computer Applications,
January 1997 (accepted)
- 6. Jain, L. C. and V. Vemuri, "An Introduction to Intelligent
Systems," Ed. (L. C. Jain and R. K. Jain), Hybrid Intelligent
Engineering Systems, Chapter 1, World Scientific Publishing Co,
Singapore, 1997 (in press)
- 5. Smart, J. A. and V. Vemuri, "Interactive simulated
annealing," Intl. Journal of General Systems, 25(2):119-146, 1996.
- 4. Vemuri, V. and W. Cedeño, Multi-Niche Crowding for
Multimodal Search, Book Chapter, Practical Handbook of Genetic
Algorithms: New Frontiers, Vol. 2 Ed. Lance Chambers, CRC Press,
1995
- 3. Cedeno, W., and V. Vemuri, "Genetic algorithms in aquifer
management," J. of Network and Computer Applications, 19:171-187,
1996.
- 2. Cedeno, W., and V. Vemuri, "A new genetic algorithm for
multi-objective optimization in water resource management," Proc.
ICNN'95, Perth, Australia, November 1995.
- 1. Cedeno, W., and V. Vemuri, "Multi-niche crowding in genetic
algorithms and its application to the assembly of DNA restriction
fragments," Evolutionary Computation, Vol. 2, No. 4, pp 321-345,
Winter 1994.
Introduction to Neural Networks
The field of cybernetics recognizes that information processing
originates with living creatures in their struggle for survival. From
this viewpoint, we can begin to consider information processing
techniques that are inherently different from those used in
conventional computations. Neurocomputers, based on principles found
in living systems, are highly parallel structures designed to
directly process information emanating from the external world,
without the intermediate step of symbolic representation. Central to
neurocomputers are artificial neural networks (ANNs). One of the
general goals of artificial neural network researchers is to
circumvent the inherent limits of serial digital computation.
An ANN is a network of artificial neurons. These artificial
neurons are specialized computational elements performing simple
computational functions. The manner in which these neurons are
interconnected defines the topology or architecture of the network.
Whereas a classical digital computer is programmed, an ANN is
trained. Adjusting the strengths of interconnections (weights) among
the neurons constitutes training or learning. The concept of memory
in a conventional computer corresponds to the concept of weight
settings. The processing and storage functions in ANNs are not
centralized and distinct; each neuron acts as a processor and the set
of weights associated with that neuron acts as distributed storage. In
a typical ANN one can expect to find hundreds of processors and
thousands of storage elements.
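The training idea described above can be sketched with a single artificial neuron. The code below is a minimal illustration (not any specific network from the publications list): one sigmoid neuron learns the logical AND function by gradient descent on squared error, adjusting its weights rather than following a stored program.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(weights, bias, inputs):
    # An artificial neuron: weighted sum of inputs through a nonlinearity.
    return sigmoid(sum(w * xi for w, xi in zip(weights, inputs)) + bias)

# "Training" = adjusting the weights. Here a single neuron learns the
# logical AND function by stochastic gradient descent on squared error.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

for _ in range(5000):
    for inputs, target in data:
        y = neuron(w, b, inputs)
        delta = (y - target) * y * (1.0 - y)  # d(error)/d(net input)
        w = [wi - lr * delta * xi for wi, xi in zip(w, inputs)]
        b -= lr * delta

print([round(neuron(w, b, x)) for x, _ in data])  # → [0, 0, 0, 1]
```

AND is linearly separable, so one neuron suffices; a multi-layer network with the same update idea propagated backward through the layers is what "back propagation" refers to in the sections below.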
In the parlance of physical sciences, a neural network is a
nonlinear dynamical system which is capable of mimicking some aspect
of cognition. If each neuron is visualized as an analog operational
amplifier, then a fully parallel analog computer would serve as an
excellent first approximation to one class of neural networks.
Methods based on neural networks have a distinctly different
flavor from those based on artificial intelligence (AI) techniques.
In classical AI, a symbolic representation of the external world is
the starting point and a digital computer is used as a symbol
manipulating engine. The symbol string obtained as a solution is
converted back into a physical representation for human cognition.
Expert systems, an offspring of the AI school, attempt to capture
the domain knowledge of a problem in terms of IF..THEN..ELSE kind of
rules. Formulation of these rules is a tedious process and systems
built on this philosophy tend to be "brittle"; that is, any new
knowledge may force a radical redefinition of the rule base.
Artificial neural networks, by virtue of their training, exhibit a
more "plastic" behavior. For this reason, ANNs more appropriately
belong to a class of methods that are being dubbed as "soft
computing."
The term "soft computing" can be defined as a collection of
methods based on principles derived from neural networks, genetic
algorithms, fuzzy set theory, artificial life, and so on. The goal of
this emerging computational discipline is to solve computationally
hard problems not by brute force but by borrowing principles of
information processing from nature. Beginning in the early 1980s,
ANNs, as well as some of the other soft computing methods, have been
used systematically to solve a variety of computationally hard
problems such as pattern recognition under real world conditions,
fuzzy pattern matching, nonlinear discrimination of noisy signals,
combinatorial optimization, nonlinear real-time control, and so on.
Our Experience with Neural Nets
The burgeoning literature in the field of ANN research is full of
examples of the validity, advantages and shortcomings of the new
paradigm. Our own experience covers only a small subset of the
problems that can be solved with neural nets.
(a) Analysis of Seismic Signals.
One of the important problems of post-Cold War politics is the
problem of verifying compliance with nuclear test ban treaties. Back
propagation on feed forward networks, unsupervised self-organizing
networks, radial basis function networks, probabilistic networks, as
well as adaptive resonance techniques have been used by members of
our team to discriminate underground nuclear explosions from
earthquakes by analyzing far-field seismic signals. We were also
successful in deducing seismic parameters such as the depth of an
event, as well as dip and slip. Although all ANN methods gave results
comparable to or better than those of conventional techniques, the "conjugate
gradient back propagation with weight elimination" made
classification predictions with consistently better than 90%
accuracy.
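The "weight elimination" mentioned above refers to a penalty term added to the training error (after Weigend, Rumelhart, and Huberman) that prunes unneeded connections during training. A minimal sketch follows; the parameter names `w0` and `lam` are illustrative defaults, not values from the work reported here.

```python
def weight_elimination_penalty(weights, w0=1.0, lam=0.001):
    # Each weight contributes lam * (w/w0)^2 / (1 + (w/w0)^2):
    # ~ lam*(w/w0)^2 for |w| << w0, driving small weights toward zero;
    # ~ lam for |w| >> w0, a fixed cost, so large useful weights survive.
    return lam * sum((w / w0) ** 2 / (1.0 + (w / w0) ** 2) for w in weights)

# The penalty is added to the squared prediction error before each
# gradient step, trading fit quality against network complexity.
print(weight_elimination_penalty([0.0, 0.1, 10.0]))
```

Because the cost saturates, training tends to eliminate marginal weights entirely, which is what keeps the classifier from overfitting the seismic data.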
(b) Ill-posed Problems.
Many inversion problems of remote sensing can be formulated as
Fredholm integral equations of the first kind. Inverse problems are
difficult to solve not only because they are ill-posed in the
Hadamard sense, but also because the associated matrices are
ill-conditioned. By using the sum of the squared errors as the energy
function of a Hopfield net, we were able to invert a variety of
poorly conditioned matrices arising out of Fredholm equations. We are
in the process of refining and applying this technique to deduce
ozone profiles from satellite data and to deduce atmospheric aerosol
concentrations from data gathered from Guidestar experiments being
conducted at the Lawrence Livermore National Laboratory.
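The energy-function idea above can be sketched as follows: with E = 1/2 ||Ax - b||^2 as the energy, a continuous Hopfield-style network whose state follows dx/dt = -grad E settles into a least-squares solution of Ax = b. The toy 2x2 system below is purely illustrative, not data from the ozone or Guidestar work.

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def hopfield_invert(A, b, steps=20000, dt=0.01):
    # Energy E = 1/2 ||Ax - b||^2; the network state x follows the
    # gradient flow dx/dt = -A^T (A x - b), Euler-integrated here.
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(steps):
        r = [yi - bi for yi, bi in zip(matvec(A, x), b)]  # residual Ax - b
        g = matvec(At, r)                                 # gradient of E
        x = [xi - dt * gi for xi, gi in zip(x, g)]
    return x

# A mildly ill-conditioned 2x2 system whose exact solution is [1, 1].
A = [[1.0, 0.99], [0.99, 1.0]]
b = [1.99, 1.99]
print([round(xi, 3) for xi in hopfield_invert(A, b)])  # → [1.0, 1.0]
```

The descent never forms A^-1 explicitly, which is why the approach tolerates the ill-conditioned matrices that arise from discretized Fredholm kernels.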
(c) Time-series Prediction.
Chaotic phenomena abound in nature. Because there is often a well
defined deterministic generating process behind the observed chaos,
it should be possible to make short range predictions of their
behavior; it is well known that long range predictions are not
possible. We were able to train back propagation networks to make
short range predictions of well-known chaotic time series like the
Mackey-Glass series. This work has important implications in such
disparate areas as digital communications and stock portfolio
management. We are in the process of applying this method to identify
the internal structure of a digital shift register by studying the
pseudo-random bit sequence generated by the system.
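For concreteness, the Mackey-Glass series mentioned above is generated by a delay differential equation; the sketch below Euler-integrates it with the standard parameters a = 0.2, b = 0.1, tau = 17 (the coarse step dt = 1 is for brevity, not a claim about the original experiments).

```python
def mackey_glass(n, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
    # Euler integration of dx/dt = a*x(t-tau)/(1 + x(t-tau)^10) - b*x(t).
    history = [x0] * (tau + 1)  # constant initial history
    series = []
    x = x0
    for _ in range(n):
        x_tau = history[0]  # the delayed value x(t - tau)
        x = x + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x)
        history = history[1:] + [x]
        series.append(x)
    return series

s = mackey_glass(500)
# A back propagation net would then be trained to map a window of past
# samples, e.g. (s[t], s[t-6], s[t-12], s[t-18]), to a future value such
# as s[t+6]; only such short-range mappings are learnable for chaos.
```

With tau = 17 the equation is chaotic, which is what limits any predictor, neural or otherwise, to the short range.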
(d) Modeling and System Identification.
Another problem that is akin to (b) and (c) is the identification
of mathematical models on the basis of experimental measurements. If
the model structure is not clear at the outset, non-parametric
identification procedures can play a useful role. Here, instead of
identifying the physical parameters of the system, one simply develops
a model that fits the observed experimental data. We are currently
experimenting with the use of recurrent neural nets as well as nets
with dynamic neurons in solving this problem.
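As a minimal illustration of this idea, the sketch below fits a one-step-ahead predictor directly to input-output records, without reference to the system's internal physics. Ordinary least squares stands in for the recurrent nets mentioned above, and the data are simulated rather than real measurements.

```python
import random

random.seed(0)

# Simulated measurements: the "unknown" system generating the data is
# y[t] = 0.8*y[t-1] + 0.2*u[t-1] (linear here purely for illustration).
u = [random.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.8 * y[t - 1] + 0.2 * u[t - 1])

# Fit a one-step-ahead predictor y[t] ~ c1*y[t-1] + c2*u[t-1] to the
# records by least squares (normal equations for the two unknowns).
S11 = sum(y[t - 1] ** 2 for t in range(1, 200))
S12 = sum(y[t - 1] * u[t - 1] for t in range(1, 200))
S22 = sum(u[t - 1] ** 2 for t in range(1, 200))
r1 = sum(y[t] * y[t - 1] for t in range(1, 200))
r2 = sum(y[t] * u[t - 1] for t in range(1, 200))
det = S11 * S22 - S12 * S12
c1 = (r1 * S22 - r2 * S12) / det
c2 = (S11 * r2 - S12 * r1) / det
print(round(c1, 3), round(c2, 3))  # → 0.8 0.2
```

A recurrent net replaces the fixed linear form with a learned nonlinear map, but the workflow is the same: fit the observed behavior, not the physical parameters.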
(e) Troubleshooting and Diagnostics.
In this project we are trying to use neural nets for on-line
troubleshooting and diagnosis of data processing equipment. Here a
neural net is being used to categorize a troubleshooting problem
prior to searching a database of prior cases of a similar nature.
vemuri1@llnl.gov
Monday the 11th, December 1995