Probabilistic neural networks

D. F. Specht, Neural Networks, 1990, Elsevier
Abstract
By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware “neurons” that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back-propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
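The abstract describes the PNN at a high level: a pattern layer of exponential (Gaussian-kernel) units centered on stored training exemplars, a summation layer that forms Parzen-window estimates of each class-conditional density, and an output layer that selects the class with the largest estimate. The sketch below illustrates that idea in Python with NumPy; it is not the paper's implementation, and the class name `PNN`, the single shared smoothing parameter `sigma`, and the equal-prior assumption in `predict_proba` are illustrative choices made here.

```python
import numpy as np

class PNN:
    """Minimal Parzen-window PNN sketch (Gaussian kernel, shared smoothing sigma).

    Illustrative only: assumes equal class priors and one sigma for all classes.
    """

    def __init__(self, sigma=0.5):
        self.sigma = sigma        # smoothing parameter of the exponential kernel
        self.classes_ = None
        self.templates_ = {}      # class label -> stored training patterns

    def fit(self, X, y):
        # Pattern layer: simply memorize the training exemplars per class
        # (no iterative training, which is the source of the speed advantage
        # over back-propagation noted in the abstract).
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.templates_ = {c: X[y == c] for c in self.classes_}
        return self

    def _class_densities(self, x):
        # Summation layer: average the exponential activations of each class's
        # pattern units, i.e. a Parzen-window estimate of the class-conditional PDF.
        dens = []
        for c in self.classes_:
            T = self.templates_[c]
            d2 = np.sum((T - x) ** 2, axis=1)
            dens.append(np.mean(np.exp(-d2 / (2.0 * self.sigma ** 2))))
        return np.array(dens)

    def predict_proba(self, X):
        # Normalized density estimates serve as rough posterior probabilities
        # (equal priors assumed), supporting the probability/reliability output
        # the abstract mentions alongside the classification decision.
        P = np.array([self._class_densities(x) for x in np.asarray(X, dtype=float)])
        return P / P.sum(axis=1, keepdims=True)

    def predict(self, X):
        # Output layer: choose the class with the largest estimated density.
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]


if __name__ == "__main__":
    # Toy usage on two Gaussian blobs.
    rng = np.random.default_rng(0)
    X0 = rng.normal(loc=[-1, -1], scale=0.5, size=(50, 2))
    X1 = rng.normal(loc=[+1, +1], scale=0.5, size=(50, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 50 + [1] * 50)
    clf = PNN(sigma=0.3).fit(X, y)
    print(clf.predict([[-1.0, -0.8], [0.9, 1.1]]))   # expected: [0 1]
```

Because "training" is just storing exemplars, new data can be incorporated by appending patterns to the relevant class, which is the sense in which decision boundaries can be updated in real time.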