Artificial Neural Networks Essay

PAPER ON ARTIFICIAL NEURAL NETWORKS

ABSTRACT

The developments in Artificial Intelligence (AI) appear promising, but when applied to real-world intelligent tasks such as speech, vision and natural language processing, AI techniques show their inadequacies and ‘brittleness’ in the sense that they become highly task specific. Computing models inspired by biological neural networks can provide new directions for solving problems arising in natural tasks. The purpose of this paper is to discuss the characteristics and applications of artificial neural networks.

In Characteristics of Neural Networks, we discuss the features of biological neural networks and a performance comparison of computers and biological neural networks. In Applications, we discuss direct applications, which include pattern classification, associative memories, optimization and control applications, and application areas. Finally, we present the conclusion and bibliography.

CHARACTERISTICS OF NEURAL NETWORKS

FEATURES OF BIOLOGICAL NEURAL NETWORKS:


Some attractive features of the biological neural network that make it superior to even the most sophisticated Artificial Intelligence computer system for pattern recognition tasks are the following:
• Robustness and fault tolerance: The decay of nerve cells does not seem to affect performance significantly.
• Flexibility: The network automatically adjusts to a new environment without using any preprogrammed instructions.
• Ability to deal with a variety of data situations: The network can deal with information that is fuzzy, probabilistic, noisy and inconsistent.
• Collective computation: The network routinely performs many operations in parallel, and also performs a given task in a distributed manner.

PERFORMANCE COMPARISON OF COMPUTER AND BIOLOGICAL NEURAL NETWORKS:

A set of processing units, when assembled in a closely interconnected network, offers a surprisingly rich structure exhibiting some features of the biological neural network. Such a structure is called an artificial neural network (ANN). Since ANNs are implemented on computers, it is worth comparing the processing capabilities of a computer with those of the brain.

Speed: Neural networks are slow in processing information. For the most advanced computers, the cycle time corresponding to the execution of one step of a program in the central processing unit is in the range of a few nanoseconds. The cycle time corresponding to a neural event prompted by an external stimulus is in the milliseconds range. Thus the computer processes information nearly a million times faster.

Processing: Neural networks can perform massively parallel operations. On a conventional computer, most programs have a large number of instructions, and they operate in a sequential mode, one instruction after another.

On the other hand, the brain operates with massively parallel operations, each of them having comparatively fewer steps. This explains the superior performance of human information processing for certain tasks, despite being several orders of magnitude slower than computer processing of information.

Size and Complexity: Neural networks have a large number of computing elements, and the computing is not restricted to within neurons. The number of neurons in a brain is estimated to be about 10^11 and the total number of interconnections to be around 10^15.

It is this size and complexity of connections that may be giving the brain the power of performing complex pattern recognition tasks, which we are unable to realize on a computer. The complexity of the brain is further compounded by the fact that computing takes place not only inside the cell body, or soma, but also outside, in the dendrites and synapses.

Storage: Neural networks store information in the strengths of the interconnections. In a computer, information is stored in memory, which is addressed by its location. Any new information in the same location destroys the old information.

In contrast, in a neural network new information is added by adjusting the interconnection strengths, without destroying the old information. Thus information in the brain is adaptable, whereas in the computer it is strictly replaceable.

Fault Tolerance: Neural networks exhibit fault tolerance since the information is distributed in the connections throughout the network. Even if a few connections are snapped or a few neurons are not functioning, the information is still preserved due to the distributed nature of the encoded information.

In contrast, computers are inherently not fault tolerant, in the sense that information corrupted in the memory cannot be retrieved.

Control Mechanism: There is no central control for processing information in the brain. In a computer there is a control unit which monitors all the activities of computing. In a neural network each neuron acts based on the information locally available, and transmits its output to the neurons connected to it. Thus there is no specific control mechanism external to the computing task.

Currently, fuzzy logic concepts are being used to enhance the capability of neural networks to deal with real-world problems such as speech, image processing, natural language processing and decision-making.

ARTIFICIAL NEURAL NETWORKS: TERMINOLOGY

Processing Unit: We can consider an artificial neural network (ANN) as a highly simplified model of the structure of the biological neural network. An ANN consists of interconnected processing units. The general model of a processing unit consists of a summing part followed by an output part.

The summing part receives N input values, weights each value, and computes a weighted sum. The weighted sum is called the activation value. The output part produces a signal from the activation value. The sign of the weight for each input determines whether the input is excitatory (positive weight) or inhibitory (negative weight). The inputs could be discrete or continuous data values, and likewise the outputs could also be discrete or continuous. The inputs and outputs could also be deterministic, stochastic or fuzzy.
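To make the summing and output parts concrete, here is a minimal sketch in Python with NumPy; the sigmoid output function, the bias term and all names are illustrative choices, not prescribed by the text:

```python
import numpy as np

def unit_output(inputs, weights, bias=0.0):
    """One processing unit: a summing part followed by an output part."""
    # Summing part: weighted sum of the N input values (the activation value).
    # Positive weights act as excitatory inputs, negative weights as inhibitory.
    activation = np.dot(weights, inputs) + bias
    # Output part: produce the output signal from the activation value.
    # A sigmoid is used here as one possible continuous output function.
    return 1.0 / (1.0 + np.exp(-activation))

# Example: three inputs, two excitatory weights and one inhibitory weight.
x = np.array([0.5, 1.0, 0.2])
w = np.array([0.8, 0.4, -0.6])
print(unit_output(x, w))        # output state of the unit
```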

Interconnections: In an artificial neural network several processing units are interconnected according to some topology to accomplish a pattern recognition task. Therefore the inputs to a processing unit may come from the outputs of other processing units, and/or from external sources. The output of each unit may be given to several units, including itself.

Operations: In operation, each unit of an ANN receives inputs from other connected units and/or from an external source. A weighted sum of the inputs is computed at a given instant of time. The activation value determines the actual output from the output function unit, i.e., the output state of the unit. The output values and other external inputs in turn determine the activation and output states of the other units.

Update: In implementation, there are several options available for both activation and synaptic dynamics. In particular, the updating of the output states of all the units could be performed synchronously. For each unit, the output state can be determined from the activation value either deterministically or stochastically.

APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS

In the applications, two different situations exist:
• The known neural network concepts and models are directly applicable.
• There appears to be potential for using neural network ideas, but it is not yet clear how to formulate the real-world problems to evolve a suitable neural network architecture.

Apart from the attempts to apply some existing models to real-world problems, several fundamental issues are also being addressed to understand the basic operations and dynamics of the biological neural network, in order to derive suitable models of artificial neural networks.

DIRECT APPLICATIONS:

Pattern classification: Pattern classification is the most direct among all applications of neural networks. In fact, neural networks became very popular because of the ability of a multilayer feedforward neural network to form complex decision regions in the pattern space for classification. Many pattern recognition problems, especially character or other symbol recognition and vowel recognition, have been implemented using a multilayer neural network.
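As an illustration of such a network, the following sketch shows a two-layer feedforward classifier mapping input patterns to class labels. The layer sizes, tanh nonlinearity and random weights are assumptions made for demonstration; a real network would have its weights learned, e.g. by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a two-layer feedforward network for classification."""
    h = np.tanh(x @ w1 + b1)           # hidden layer: nonlinear feature mapping
    scores = h @ w2 + b2               # output layer: one score per class
    return np.argmax(scores, axis=1)   # predicted class label per pattern

# Randomly initialised weights stand in for a trained network here.
w1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
patterns = rng.normal(size=(5, 2))     # five 2-dimensional input patterns
print(mlp_forward(patterns, w1, b1, w2, b2))
```

The hidden layer is what allows such a network to form complex (non-linear) decision regions in the pattern space.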

Note, however, that these networks are not directly applicable in situations where the patterns are deformed or modified due to transformations such as translation, rotation and scale change, although some of them may work well even with large additive uncorrelated noise in the data. Direct applications are successful if the data is directly presentable to the classification network. Three such cases are considered for detailed discussion in this section. They are:
• Recognition of Olympic games symbols
• Recognition of characters
• Making an opening bid from a dealt hand in the card game of contract bridge

Associative memories: The objective of an associative memory is to store a pattern or data for later recall with a partial or noisy version of the pattern as input, or to store the association between two patterns for later recall of one of the patterns given the other. Both feedback and feedforward topologies of neural networks are directly useful for these applications. An associative memory, if used in a feedback structure of the Hopfield type, can function as a content addressable memory as well.

The stable states of the network, which represent the energy minima or basins of attraction, are used to store the pattern information. In a feedforward network the associations corresponding to the input-output pattern pairs are stored in the weights of the network. Applications of these networks for associative memory are direct if the patterns are available in the form of one- or two-dimensional arrays of values. Associative memories as content addressable memories are quite powerful.
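A minimal sketch of such a Hopfield-type content addressable memory follows, assuming Hebbian storage of bipolar patterns and synchronous recall; the pattern sizes and the fixed number of recall steps are illustrative choices:

```python
import numpy as np

def store(patterns):
    """Store bipolar (+1/-1) patterns in the connection strengths (Hebb rule)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)            # no self-connections
    return w / n

def recall(w, probe, steps=10):
    """Iterate the network from a noisy probe toward a stable state."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1                 # break ties toward +1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
w = store(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                  # flip one bit: a partial/noisy cue
print(recall(w, noisy))               # settles back to patterns[0]
```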

Optimization: One of the most successful applications of neural network principles is in solving optimization problems. There are many situations where a problem can be formulated as the minimization or maximization of some cost function or objective function, subject to certain constraints. It is possible to map such a problem onto a feedback network, where the units and connection strengths are identified by comparing the cost function of the problem with the energy function of the network. The solution of the problem lies in determining the state of the network at the global minimum of the energy function. In this process, it is necessary to overcome the local minima of the energy function. This is accomplished by adopting a simulated annealing schedule for implementing the search for the global minimum.

The solution to an optimization problem by neural networks consists of the following steps:
• Express the objective function or cost function and the constraints of the given problem in terms of the variables of the problem:

Objective function: E = cost + global constraints   (1)

• Compare the objective function in (1) with the energy function (2) of a feedback neural network of the Hopfield type, to identify the states and the weights of the network in terms of the variables and parameters appearing in the objective function.

Energy function: E = -(1/2) Σ wij Si Sj, (i ≠ j)   (2)

• The solution to the optimization problem consists of determining the state corresponding to the global minimum of the energy function of the network. Assuming bipolar states for each unit, the dynamics of the network can be expressed as

Si(t+1) = sgn( Σj wij Sj(t) ), (j ≠ i)   (3)

• Direct application of the above dynamics in search of a stable state may lead to a state corresponding to a local minimum of the energy function. In order to reach the global minimum, stochastic units are used, so that the search can occasionally move against the deterministic dynamics of the network and escape local minima. For a stochastic unit the state of the unit is updated using a probability law, which is controlled by a temperature parameter (T). At low temperatures, the stochastic update approaches the deterministic update, which is dictated by the output function of the unit. The state of a neural network with stochastic units is described in terms of a probability distribution. The probability distribution of the states at thermal equilibrium follows the Boltzmann-Gibbs law, namely

P(sα) = (1/Z) exp(-Eα/T)   (4)

where Eα is the energy of state sα and Z is the normalizing (partition) function.
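The stochastic update and annealing schedule just described might be sketched as follows. This is a minimal illustration assuming Glauber-style update probabilities consistent with equations (3) and (4); the initial temperature T0 and the geometric cooling factor are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_update(w, s, T):
    """Update one randomly chosen bipolar unit using a probability law
    controlled by the temperature parameter T (cf. equation (4))."""
    i = rng.integers(len(s))
    activation = w[i] @ s
    # Probability of the unit taking state +1; as T -> 0 this approaches
    # the deterministic update Si = sgn(activation) of equation (3).
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * activation / T))
    s[i] = 1 if rng.random() < p_plus else -1
    return s

def anneal(w, s, T0=10.0, cooling=0.95, sweeps=200):
    """A simple geometric annealing schedule for the search of the
    global minimum of the energy function."""
    T = T0
    for _ in range(sweeps):
        for _ in range(len(s)):
            s = stochastic_update(w, s, T)
        T *= cooling
    return s
```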

Graph Bipartition Problem:

[Figure: a graph of N nodes partitioned into two equal halves, with minimum connectivity between the halves]

The problem is to partition a graph of N nodes into two equal halves, as shown in the figure, such that the connectivity between the two partitions is minimum. The problem can be mapped onto a Hopfield network, in which each bipolar unit corresponds to a node in the graph, with the state Si = +1 representing a node in one half and Si = -1 representing a node in the other half. Let Cij = 1 if the nodes i and j are connected and Cij = 0 if they are not connected. Thus the cost term Cij Si Sj contributes a non-zero value only if the nodes are connected. We have Cij Si Sj = +1 if the connected nodes are in the same partition, and Cij Si Sj = -1 if they are in different partitions. For equal division of the nodes, Σ Si = 0.

Therefore the cost term with the equality constraint is given by

E = -(1/2) Σij Cij Si Sj + (α/2) (Σi Si)^2

where the positive constant ‘α’ is used to indicate the relative strength of the two terms in the energy function.
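As a sanity check of this mapping, the following sketch evaluates the bipartition energy for a small example; the 4-node ring graph and the value of α are illustrative assumptions:

```python
import numpy as np

def bipartition_energy(C, s, alpha=1.0):
    """Energy of a candidate bipartition: the first term favours keeping
    connected nodes (Cij = 1) in the same half, the second penalises
    unequal halves.  alpha sets the relative strength of the two terms."""
    return -0.5 * s @ C @ s + 0.5 * alpha * np.sum(s) ** 2

# A 4-node ring graph; s assigns each node to one half (+1) or the other (-1).
C = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(bipartition_energy(C, np.array([1, 1, -1, -1])))  # equal halves, cut of 2
print(bipartition_energy(C, np.array([1, 1, 1, -1])))   # unequal: higher energy
print(bipartition_energy(C, np.array([1, -1, 1, -1])))  # cut of 4: higher energy
```

Minimizing this energy, e.g. with the annealing sketch above, searches for the state with minimum connectivity between two equal halves.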

Traveling Salesman Problem: For a given number of cities (N) and their intercity distances, the objective is to determine a closed tour of the cities such that the total distance is minimized, subject to the constraints that each city is visited only once and all cities are covered in the tour.

Denoting the state of a binary unit of a Hopfield network as Sia, where Sia = 1 indicates that city a is to be visited at the ith stage of the tour, we can write the following cost function:

E = Σ dab Sia (S(i-1)b + S(i+1)b) + (α/2) Σ Sia Sib (b ≠ a) + (β/2) Σ Sia Sja (j ≠ i) + (γ/2) (Σ Sia - N)^2

where the first term gives the total distance in the tour, with dab representing the distance between cities a and b. The second term vanishes if no more than one city is visited at each stage. The third term vanishes if each city is visited not more than once.

The last term vanishes when each city is visited exactly once and all cities are covered in the tour. The positive constants α, β and γ denote the relative importance given to the constraints. A solution to the traveling salesman problem is obtained by determining a stable state of the network using either a deterministic relaxation procedure or a stochastic relaxation procedure with an annealing schedule.
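The following sketch constructs this cost function and verifies that the constraint terms vanish for a valid tour, i.e. when S is a permutation matrix. The 4-city example, the random symmetric distance matrix and the unit coefficients are illustrative assumptions:

```python
import numpy as np

def tsp_energy(d, S, alpha=1.0, beta=1.0, gamma=1.0):
    """Energy for an N-city tour; S[i, a] = 1 if city a is visited at stage i.
    alpha, beta, gamma weight the three constraint terms of the cost function."""
    N = d.shape[0]
    dist = 0.0
    for i in range(N):
        for a in range(N):
            for b in range(N):
                # distance term: cities at neighbouring stages i-1, i+1 (cyclic)
                dist += d[a, b] * S[i, a] * (S[(i - 1) % N, b] + S[(i + 1) % N, b])
    stage = sum(S[i, a] * S[i, b] for i in range(N)
                for a in range(N) for b in range(N) if b != a)
    city = sum(S[i, a] * S[j, a] for a in range(N)
               for i in range(N) for j in range(N) if j != i)
    total = (S.sum() - N) ** 2
    return dist + 0.5 * alpha * stage + 0.5 * beta * city + 0.5 * gamma * total

rng = np.random.default_rng(0)
r = rng.random((4, 4))
d = (r + r.T) / 2                  # symmetric intercity distances
np.fill_diagonal(d, 0.0)
S = np.eye(4)                      # visit city i at stage i: a valid tour
print(tsp_energy(d, S))            # constraint terms vanish; only distance
                                   # remains (each edge counted in both directions)
```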

CONTROL APPLICATIONS:

There are several situations in control applications where the principles of neural networks can be directly applied. The applications include process control, robotics, industrial manufacturing, aerospace and several others. The main task in a control situation is to generate an appropriate input signal to the physical process to obtain the desired response from the plant. The controller generates the actuating signal when the external input is given. The design of a controller depends on the nature of the plant and the way the input is derived for the controller for the operation of the plant. The plant may be static or dynamic. For a static plant, the transfer function is given by a constant.

For a dynamic plant, the transfer function is given by the ratio of the Laplace transform of the plant’s output to the Laplace transform of the plant’s input. There are two ways of controlling a plant: open loop control and feedback control. In open loop control the controller consists of a cascade of a system and the inverse of the plant. The system is used to achieve the desired response for the input. The controller thus generates an actuating signal to the plant to obtain the desired response at the output of the plant.

This needs the inverse transfer function of the plant, and the plant should not change its characteristics during operation. Both these problems are overcome in the feedback control mechanism, where the controller is designed in such a way that the output becomes independent of the plant transfer function.
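As an illustration of open loop control with a learned plant inverse, the following sketch trains a small network to invert a hypothetical static plant. The plant model, network size and training parameters are all assumptions made for demonstration, not a prescribed design:

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    """A hypothetical static plant: a fixed, mildly nonlinear mapping."""
    return 0.6 * u + 0.2 * np.tanh(u)

# Train a one-hidden-layer network as the plant's inverse: given a desired
# output y, produce the actuating signal u with plant(u) close to y.
u = rng.uniform(-2, 2, size=(500, 1))          # sampled plant inputs
y = plant(u)                                   # corresponding plant outputs
w1, b1 = rng.normal(size=(1, 16)) * 0.5, np.zeros(16)
w2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)
lr = 0.05
for _ in range(2000):                          # gradient descent on (u_hat - u)^2
    h = np.tanh(y @ w1 + b1)
    u_hat = h @ w2 + b2
    g = 2 * (u_hat - u) / len(u)               # gradient of the loss w.r.t. u_hat
    gh = (g @ w2.T) * (1 - h ** 2)             # backpropagate through tanh
    w2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    w1 -= lr * (y.T @ gh); b1 -= lr * gh.sum(0)

y_desired = np.array([[0.5]])
u_cmd = np.tanh(y_desired @ w1 + b1) @ w2 + b2  # controller's actuating signal
print(plant(u_cmd))                             # approximately 0.5
```

Note the limitation stated above: this open loop scheme works only as long as the plant does not change its characteristics after the inverse has been learned.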

APPLICATION AREAS:

The excitement in neural networks started mainly due to difficulties in dealing with problems in the fields of speech, image, natural language and decision-making using known methods of pattern recognition and artificial intelligence. Several of these problems have been attempted using the principles of neural networks. The main issue in all these problems is the representation of the real-world problem in the system. The power of a neural network can be exploited provided the problem can be represented well in the network, as in the direct applications. But in the application areas, the poor and fragile performance of neural network based systems may be attributed to weaknesses in the input processing and in the mapping of the problem onto the neural network model.

Since problems in speech, image, natural language and decision-making seem to be solved effortlessly by human beings, our expectations of an artificial system are also high. It is worth remembering that human pattern recognition is an integrated system of data acquisition, input preprocessing, feature extraction and understanding. It is not feasible to assess the performance of each of these processes in isolation.

CONCLUSION:

The field of artificial neural networks came into prominence mainly because of our inability to deal with natural tasks such as speech, image processing, decision-making and natural language processing. Our discussion of some tasks in these areas suggests that we still have not succeeded in realizing natural, human-like preprocessing of speech and images for feature extraction, or in modeling the higher levels of cognition for decision-making and natural language processing.

It is likely that new models will evolve to deal with issues such as invariant pattern recognition, interaction of local and global knowledge, stability, plasticity, feature extraction from temporal sequences such as image sequences, and matching patterns at the semantic level.

Bibliography:
• IEEE Computer, Oct. 1996.
• B. Yegnanarayana, Artificial Neural Networks.
• W. Ashby, Design for a Brain, New York: Wiley and Sons, 1952.
• G. Burke, ‘Neural Networks: Brainy way to trade?’, Futures, August 1992.

