Everyone tries to forecast the future. Bankers need to predict the creditworthiness of customers. Marketing analysts want to predict future sales. Economists want to predict economic cycles. And everybody wants to know whether the stock market will be up or down tomorrow. Now you can incorporate neural networks as a powerful forecasting tool with MS Excel.
Understanding neural networks is not difficult. This book is different from most of the books on this subject available on the market.
Programming skills like C++ or Java NOT required
Minimal mathematical knowledge
Rocket science made easy
Theory And Technical Stuff:
Neural networks are very effective when lots of examples must be analyzed, or when a structure in the data must be found but a single algorithmic solution is impossible to formulate. When these conditions are present, neural networks are used as computational tools for examining data and developing models that help to identify interesting patterns or structures in the data. The data used to develop these models is known as training data. Once a neural network has been trained, and has learned the patterns that exist in that data, it can be applied to new data, thereby achieving a variety of outcomes. Neural networks can be used to predict, classify, and cluster data.
Many different neural network models have been developed over the last fifty years or so to achieve these tasks of prediction, classification, and clustering. In this book we will be developing a neural network model that has successfully found application across a broad range of business areas. We call this model a multilayered feedforward neural network (MFNN); it is an example of a neural network trained with supervised learning.
In supervised learning, we feed the neural network training data that contains complete information about the characteristics of the data and the observable outcomes. Models can then be developed that learn the relationship between these characteristics (inputs) and outcomes (outputs). For example, developing an MFNN to model the relationship between money spent during last week’s advertising campaign and this week’s sales figures is a prediction application. Another example is using an MFNN to model and classify the relationship between a customer’s demographic characteristics and their status as a high-value or low-value customer. For both of these example applications, the training data must contain numeric information on both the inputs and the outputs in order for the MFNN to generate a model. The MFNN is then repeatedly trained with this data until it learns to represent these relationships correctly.
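To make the idea of inputs flowing through an MFNN concrete, here is a minimal sketch of a 2-input, 3-hidden-unit, 1-output network in Python. The weights are random placeholders, not a trained advertising model, and the input names are hypothetical:

```python
import numpy as np

def sigmoid(x):
    # Squash any input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2-3-1 MFNN: the weights below are illustrative
# placeholders, not values learned from real data.
rng = np.random.default_rng(0)
W_hidden = rng.normal(scale=0.1, size=(2, 3))  # input-to-hidden weights
W_output = rng.normal(scale=0.1, size=(3, 1))  # hidden-to-output weights

def forward(x):
    hidden = sigmoid(x @ W_hidden)     # hidden-layer activations
    return sigmoid(hidden @ W_output)  # network output

# One input pattern, e.g. normalized ad spend and a season index.
x = np.array([[0.4, 0.7]])
prediction = forward(x)
print(prediction.shape)  # one output value per input pattern
```

Training (covered next) is the process of adjusting `W_hidden` and `W_output` until the outputs match the known outcomes in the training data.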
For a given input pattern or data, the network produces an output (or set of outputs), and this response is compared to the known desired response of each neuron. For classification problems, the desired response of each neuron will be either zero or one, while for prediction problems it tends to be continuous valued. Corrections are made to the weights of the network to reduce the errors before the next pattern is presented. The weights are continually updated in this manner until the total error across all training patterns is reduced below some pre-defined tolerance level. This learning algorithm is called backpropagation.
The backpropagation process
Forward pass, where the outputs are calculated and the error at the output units is calculated.
Backward pass, where the output unit error is used to alter the weights on the output units. Then the error at the hidden nodes is calculated (by back-propagating the error at the output units through the weights), and the weights on the hidden nodes are altered using these values.
The main steps of the backpropagation learning algorithm are summarized below:
Step 1: Input
Steps 1 through 3 are often called the forward pass, and steps 4 through 7 are often called the backward pass. Hence, the name: back-propagation.
For each data pair to be learned, a forward pass and a backward pass are performed. This is repeated over and over again until the error reaches a low enough level (or we give up).
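The forward pass, backward pass, and repeated weight updates described above can be sketched in a few lines of Python. This is an illustrative toy example, not the book's Excel implementation: the XOR data set, the network size, the learning rate, and the random seed are all assumptions chosen just to show the loop structure:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set (XOR) standing in for real business data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Small random initial weights: the untrained network "knows nothing".
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(2, 4))  # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden-to-output weights
lr = 0.5  # learning rate (an assumption; tune per problem)

def total_error():
    Y = sigmoid(sigmoid(X @ W1) @ W2)
    return float(np.sum((Y - T) ** 2))

error_before = total_error()
for epoch in range(5000):
    # Forward pass: calculate hidden and output activations.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    # Backward pass: back-propagate the output error and alter the weights.
    dY = (Y - T) * Y * (1 - Y)       # output deltas (sigmoid derivative)
    dH = (dY @ W2.T) * H * (1 - H)   # hidden deltas via back-propagated error
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH

print(error_before, total_error())  # total error falls as training proceeds
```

In practice the loop would stop once `total_error()` drops below the pre-defined tolerance level, rather than after a fixed number of epochs.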
Calculations and Transfer Function
The behaviour of a NN (Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
For linear units, the output activity is proportional to the total weighted input.
For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.
For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
It should be noted that the sigmoid curve is widely used as a transfer function because it has the effect of "squashing" the inputs into the range (0, 1). Other functions with similar features can be used, most commonly tanh, which has an output range of (-1, 1). The sigmoid function has the additional benefit of having an extremely simple derivative function for backpropagating errors through a feed-forward neural network.
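The squashing behaviour and the simple derivative are easy to verify numerically. A short sketch (the sample input values are arbitrary):

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: squashes any input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(y):
    # The derivative expressed in terms of the sigmoid output y itself,
    # which is what makes error backpropagation so cheap to compute.
    return y * (1.0 - y)

x = np.array([-5.0, 0.0, 5.0])
print(sigmoid(x))   # approaches 0 on the left, 0.5 at zero, 1 on the right
print(np.tanh(x))   # approaches -1 on the left, 0 at zero, 1 on the right
```

Note that because tanh outputs span (-1, 1), target values must be scaled to match whichever transfer function the output units use.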
To make a neural network perform some specific task, we must choose how the units are connected to one another (see Figure 1.1), and we must set the weights on the connections appropriately. The connections determine whether it is possible for one unit to influence another. The weights specify the strength of the influence.
Typically, the weights in a neural network are initially set to small random values; this represents the network knowing nothing. As the training process proceeds, these weights converge to values that allow the network to perform a useful computation. Thus it can be said that the neural network commences knowing nothing and moves on to gain some real knowledge.
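A minimal sketch of that starting point, assuming an arbitrary 3-by-4 weight matrix and an arbitrary small range of ±0.05:

```python
import numpy as np

rng = np.random.default_rng(42)
# Small random initial weights: at this point the network "knows nothing".
# Training will gradually move these values toward useful ones.
W = rng.uniform(-0.05, 0.05, size=(3, 4))
print(W.min(), W.max())  # every value lies within the small chosen range
```

Keeping the initial values small matters with sigmoid units: large weights would push activations into the flat ends of the curve, where the derivative (and therefore the weight correction) is close to zero.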
To summarize, we can teach a three-layer network to perform a particular task by using the following procedure: present a training pattern to the network and calculate the outputs; compare the outputs with the desired outputs and calculate the error; alter the weights to reduce the error; and repeat for each training pattern until the total error falls below the pre-defined tolerance level.
The advantages of using Artificial Neural Networks software are:
They are extremely powerful computational devices
Massive parallelism makes them very efficient.
They can learn and generalize from training data – so there is no need for enormous feats of programming.
They are particularly fault tolerant – this is equivalent to the “graceful degradation” found in biological systems.
They are very noise tolerant – so they can cope with situations where normal symbolic systems would have difficulty.
In principle, they can do anything a symbolic/logic system can do, and more.
Real life applications
The applications of artificial neural networks fall within the following broad categories:
Manufacturing and industry:
· Beer flavor prediction
· Wine grading prediction
· Highway maintenance programs
· Missile targeting
· Criminal behavior prediction
Banking and finance:
· Loan underwriting
· Credit scoring
· Stock market prediction
· Credit card fraud detection
· Real-estate appraisal
Science and medicine:
· Protein sequencing
· Tumor and tissue diagnosis
· Heart attack diagnosis
· New drug effectiveness
· Prediction of air and sea currents
"The interactive book is really handy for learning the content of the book. The dynamic examples illustrate the theories clearly and give a deep impression to readers."
"The templates are excellent. I am very happy with the results and will be spending a lot less on consultants."
Microsoft Excel is a registered trademark of Microsoft Corporation.