Asymptotic Behavior of an Artificial Neural Network Defined on Multipartite Directed Graph

Problem statement: Artificial Neural Networks (ANNs) are simple models that mimic some essential features of the complex central nervous system, although they idealize the inherently stochastic and only loosely synchronous nature of real neural computation. Different ANN models are associated with directed and signed graphs. The present study proceeded by relaxing certain simplifying assumptions in the ANN model. Approach: It was assumed that the graph associated with the ANN is a multipartite directed graph whose connection matrix comprises four blocks, two of which are null while the remaining two are either both symmetric or both antisymmetric. The convergence of such a network was studied with the help of a Lyapunov functional. Results: Attractors (fixed points) of such an ANN, as well as limit cycles of different orders, were investigated. Bounds on the transient length of the neural network were also calculated. Numerical simulations in support of the results were depicted. Conclusion: It was shown that under the synchronous updating rule such networks converge to a fixed point or to a limit cycle of period 2 or 4. A bound on the transient length was discussed, and conclusions were drawn from the supporting simulation studies.


INTRODUCTION
Artificial Neural Networks (ANNs) are simple models that capture some essential features of the central nervous system. The basic processing unit is the binary unit (or linear threshold gate) of McCulloch and Pitts (1943), which is activated only when the weighted sum of its inputs reaches some threshold: w.x ≥ b. It is also assumed that if a neuron has not fired within the absolute refractory period, the basic cycle-time of the network, the accumulated Post Synaptic Potential (PSP) decays instantly to zero and the build-up of the PSP starts all over again. So the collection of signals making up the input to a neuron does not really represent a well-defined network state in the dynamical evolution of the network.
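The McCulloch-Pitts unit can be made concrete in a few lines of Python. This is a hypothetical helper written for this discussion, not code from the study; the names `threshold_unit` and `theta` are ours:

```python
def threshold_unit(weights, inputs, theta):
    """McCulloch-Pitts binary unit: fire (return 1) only when the
    weighted input sum w.x reaches the threshold theta."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= theta else 0
```

For example, a unit with weights (1, 1) and threshold 2 acts as a logical AND of its two binary inputs.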
It is known that, on average, every neuron attempts to fire an action potential once in every total cycle-time, independently of all other neurons. This implies a mean updating rate equal to the inverse of the basic cycle-time. It is difficult to capture such complicated dynamics, and instead two simplified versions of the network dynamics are considered, namely synchronous (or parallel) and sequential dynamics. In parallel dynamics, the PSP on each neuron at time t = n is determined by the activation of all other neurons in the time interval n-1 < t < n. At the beginning of each period the neuron starts from a zero PSP; that is, after every unit of time all neurons return to their resting membrane potential. This type of dynamics has already been introduced (Caianiello, 1961; Amari, 1972; Little, 1974) and is also a favorite in the study of cellular automata (Hoffman, 1987). Such deterministic, strongly synchronous ANN models are, however, idealizations of the inherently stochastic nature of neural computation. Different forms of the synaptic matrix give rise to different asymptotic dynamics of an ANN under different updating rules. The computational performance of a network, namely memory and recall, is associated with the asymptotic dynamics of the network. There are three basic types of asymptotic dynamics in an ANN: chaotic, limit cycles and fixed points. ANN models may never mimic the richness and novelty of human cognition under normal or pathological conditions. The ANN may, however, be considered as a metaphor for higher brain function, whose disorder can lead to manifestations of schizophrenia and mania (Little and Shaw, 1978).
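The distinction between the two simplified updating schemes can be sketched as follows (illustrative code, assuming a 0/1 state vector, a weight matrix A and a threshold vector b as in the model defined below; the function names are ours):

```python
def synchronous_step(A, b, x):
    """Parallel dynamics: all neurons are updated at once from the
    previous state x."""
    n = len(x)
    return [1 if sum(A[i][j] * x[j] for j in range(n)) >= b[i] else 0
            for i in range(n)]

def sequential_step(A, b, x, order=None):
    """Sequential dynamics: neurons are updated one at a time, each
    seeing the partially updated state."""
    n = len(x)
    y = list(x)
    for i in (order or range(n)):
        y[i] = 1 if sum(A[i][j] * y[j] for j in range(n)) >= b[i] else 0
    return y
```

Already on a two-neuron network the two schemes can disagree after a single step, since the sequential update of the second neuron sees the freshly updated first neuron.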
In ANN models (Hoffman, 1987) many simplifying assumptions were made from the neurobiological perspective. These can be summarized as follows: (i) each formal neuron has many excitatory and inhibitory synapses emanating from its axon; (ii) each neuron can receive input from, and send output to, every other neuron, but no neuron connects directly to itself; (iii) the connectivity is symmetric; (iv) the updating rule is synchronous or asynchronous. Hopfield's (1982) work consists of modeling Associative Memory by a neural network. Given a set of Boolean vectors to be memorized, he defined an ANN whose symmetric weights are given by the Hebbian rule between patterns. Using the analogy with the spin glass problem he proved that each such pattern is a fixed point of the network. He further showed that the sequential network dynamics is driven by a Lyapunov operator.
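Hopfield's construction can be sketched as follows: a minimal illustration in the ±1 convention with Hebbian weights and a zero diagonal. All function names here are ours, and the single-pattern case is chosen only to make the fixed-point property easy to see:

```python
def hebbian_weights(patterns):
    """Hebbian rule over +/-1 patterns; zero diagonal (no self-connection)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall_step(W, state):
    """One synchronous update in the +/-1 convention; a stored pattern
    should be left unchanged, i.e., it is a fixed point."""
    n = len(state)
    return [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]
```

With a single stored pattern p, the local field at neuron i equals (n-1)·p_i, so one recall step reproduces p exactly.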
In the present study we relax assumptions (ii) and (iii) slightly. It is assumed that the graph associated with the ANN is a multipartite directed graph and that the efficacy of the synapse communicating the output of neuron j as input to neuron i is not always equal to the efficacy of the synapse communicating the output of neuron i as input to neuron j. More precisely, we consider a connection matrix comprising four blocks, of which the two blocks along the main diagonal are null matrices and the remaining two blocks are either (i) both symmetric or (ii) both antisymmetric. The convergence of such a network is studied with the help of a Lyapunov functional. It is shown that under the synchronous updating rule such networks converge to a fixed point or to a limit cycle of period 2 or 4. The bound on the transient length is also discussed. Some simulation results in support of our findings are included.
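The block structure just described can be assembled as follows. This sketch makes the simplifying assumption (ours, not the paper's) that the two parts have equal size k, so that both off-diagonal blocks are square:

```python
def block_connection_matrix(P, Q):
    """Assemble A = [[0, Q], [P, 0]] for two parts of equal size k:
    Q carries the synapses from the second part into the first,
    P those from the first part into the second, and the null
    diagonal blocks mean there are no connections within a part."""
    k = len(P)
    n = 2 * k
    A = [[0.0] * n for _ in range(n)]
    for i in range(k):
        for j in range(k):
            A[i][k + j] = Q[i][j]   # top-right block Q
            A[k + i][j] = P[i][j]   # bottom-left block P
    return A
```

Note that A is symmetric as a whole only when Q is the transpose of P; in general the two directions of a synapse carry different efficacies, which is exactly the relaxation of assumption (iii).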

MATERIALS AND METHODS
The model: Let π : {0,1}^n → {0,1}^n be a mapping whose components π_1, ..., π_n are threshold functions:

π_i(x) = 1 if Σ_j a_ij x_j − b_i ≥ 0, and π_i(x) = 0 otherwise    (1)

π(x) = (π_1(x), ..., π_n(x))    (2)

where A = (a_ij) is an n×n real matrix known as the connection matrix, a_ij > 0 denotes an excitatory synaptic weight and a_ij < 0 an inhibitory one, and (b_1, b_2, ..., b_n) is a real threshold vector. The synchronous iteration scheme on π is given by:

x_i(t+1) = π_i(x(t)),  i = 1, 2, ..., n,  t ≥ 0    (3)

where π is the threshold function defined above.
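Since the state space {0,1}^n is finite, any trajectory of the scheme (3) eventually enters a cycle. A small sketch (our own helper, written for illustration) runs the synchronous iteration until a state repeats and reports the transient length and the period:

```python
def iterate(A, b, x0):
    """Run the synchronous scheme x(t+1) = pi(x(t)) from x0 until a
    state repeats; return (transient_length, period). Terminates
    because the state space {0,1}^n is finite."""
    seen = {}
    x = tuple(x0)
    t = 0
    n = len(x)
    while x not in seen:
        seen[x] = t
        x = tuple(1 if sum(A[i][j] * x[j] for j in range(n)) - b[i] >= 0 else 0
                  for i in range(n))
        t += 1
    return seen[x], t - seen[x]
```

For the toy matrix A = [[0,1],[1,0]] with thresholds (1,1), the state (0,1) lies on a 2-cycle and (1,1) is a fixed point.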

Lyapunov functional:
We introduce here the Lyapunov functional driving the network dynamics. Such operators were first introduced by Hopfield (1982) and subsequently used by Goles (1990) and Hoffman (1987) to analyze the fixed point behavior of the random sequential iteration of associative networks. Applications to the synchronous updating rule and to memory updating were defined and developed in Hoffman (1987).

Case 1:
We first consider the case when the non-null blocks P and Q of the connection matrix A are both symmetric. We write the blocks P and Q as:

P = (p_ij), an (n−p)×(n−p) matrix with i ∈ I_2 and j ∈ J_1
Q = (q_ij), a p×p matrix with i ∈ I_1 and j ∈ J_2

We consider the synchronous dynamics given by (3) with the strictness condition:

Σ_j a_ij x_j − b_i ≠ 0 for every i and every x ∈ {0,1}^n    (4)

We define the Lyapunov functional as follows:

E_s(x(t)) = − Σ_{i∈I_1, j∈J_2} (2x_i(t) − 1) q_ij (2x_j(t−1) − 1) − Σ_{i∈I_2, j∈J_1} (2x_i(t) − 1) p_ij (2x_j(t−1) − 1) + Σ_{i∈I_1} (2b_i − Σ_j q_ij)[(2x_i(t) − 1) + (2x_i(t−1) − 1)] + Σ_{i∈I_2} (2b_i − Σ_j p_ij)[(2x_i(t) − 1) + (2x_i(t−1) − 1)]    (5)

Lemma 1: If A is the connection matrix of the neural network, then E_s(x(t)) is a Lyapunov functional for the synchronous iteration given by (3) with (4).
Since the network evolves in the finite set {0,1}^n, the synchronous iteration scheme defined by (3) and (4) converges, for any initial configuration, in a finite number of steps to a steady state, that is, to a finite cycle. Therefore there exists a number p(x), called the period of the cycle of the network, such that: x(t + p(x)) = x(t). Theorem 1: If A is a connection matrix whose non-null blocks P and Q are both symmetric, then the orbits of the synchronous iteration are only fixed points and/or cycles of length two.
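Theorem 1 can be checked empirically on a small example. The weights and thresholds below are chosen arbitrarily for illustration and are not taken from the study; the half-integer thresholds guarantee the strictness condition (4), since the weighted sums are integers:

```python
from itertools import product

def find_period(A, b, x0):
    """Iterate the synchronous rule from x0 until a state repeats
    and return the period of the cycle that is reached."""
    seen, x, t = {}, tuple(x0), 0
    n = len(A)
    while x not in seen:
        seen[x] = t
        x = tuple(1 if sum(A[i][j] * x[j] for j in range(n)) >= b[i] else 0
                  for i in range(n))
        t += 1
    return t - seen[x]

# A = [[0, Q], [P, 0]] with symmetric blocks P and Q.
P = [[0, 1], [1, 0]]
Q = [[0, 1], [1, 0]]
n = 4
A = [[0] * n for _ in range(n)]
for i in range(2):
    for j in range(2):
        A[i][2 + j] = Q[i][j]
        A[2 + i][j] = P[i][j]
b = [0.5] * n   # half-integer thresholds: strictness condition (4) holds
periods = {find_period(A, b, x) for x in product([0, 1], repeat=n)}
```

Exhausting all 16 initial configurations yields only fixed points and 2-cycles, as Theorem 1 predicts.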
Case 2: Next we consider the case when the blocks P and Q of the connection matrix A are both antisymmetric, the updating rule is given by (3) with (4), and the self-dual condition holds:

2b_i = Σ_j a_ij,  i = 1, 2, ..., n    (6)

Here also we consider the Lyapunov functional defined by (5). Using the self-dual condition (6), the linear terms vanish and the Lyapunov functional takes the form:

E_a(x(t)) = − Σ_{i∈I_1, j∈J_2} (2x_i(t) − 1) q_ij (2x_j(t−1) − 1) − Σ_{i∈I_2, j∈J_1} (2x_i(t) − 1) p_ij (2x_j(t−1) − 1)

We now show that E_a(x(t)) is a Lyapunov functional when the connection matrix A consists of two antisymmetric blocks under the synchronous updating rule. Using the antisymmetry of P and Q, the difference between successive values is:

E_a(x(t)) − E_a(x(t−1)) = − Σ_{i∈I_1, j∈J_2} 2(x_i(t) + x_i(t−2) − 1) q_ij (2x_j(t−1) − 1) − Σ_{i∈I_2, j∈J_1} 2(x_i(t) + x_i(t−2) − 1) p_ij (2x_j(t−1) − 1)

Since, by (3) and (6), 2x_i(t) − 1 has the same sign as Σ_j a_ij (2x_j(t−1) − 1), each summand 2(x_i(t) + x_i(t−2) − 1) Σ_j a_ij (2x_j(t−1) − 1) is non-negative, so E_a(x(t)) ≤ E_a(x(t−1)). Hence we obtain the result.
Theorem 2: If E_a(x(t)) is the Lyapunov functional above, then every orbit has period T = 4.

Proof:
Let (x(0), x(1), ..., x(T−1)) be an orbit of period T of the given dynamics. Then E_a(x(t)) = E_0 is constant for t = 0, ..., T−1, so every summand of the difference E_a(x(t)) − E_a(x(t−1)) vanishes. By the strictness condition (4), Σ_j a_ij (2x_j(t−1) − 1) ≠ 0, which forces x_i(t+2) = 1 − x_i(t), and hence x_i(t+4) = x_i(t) for all t. Since x(t+2) is the complement of x(t), no orbit can have period 1 or 2. Hence the orbit has period T = 4.
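Theorem 2 can likewise be checked by exhaustive simulation on a small example. Again the antisymmetric blocks are chosen arbitrarily for illustration; the thresholds are computed from the self-dual condition (6), i.e., b_i = (Σ_j a_ij)/2:

```python
from itertools import product

def find_period(A, b, x0):
    """Iterate the synchronous rule from x0 until a state repeats
    and return the period of the cycle that is reached."""
    seen, x, t = {}, tuple(x0), 0
    n = len(A)
    while x not in seen:
        seen[x] = t
        x = tuple(1 if sum(A[i][j] * x[j] for j in range(n)) >= b[i] else 0
                  for i in range(n))
        t += 1
    return t - seen[x]

# A = [[0, Q], [P, 0]] with antisymmetric blocks P and Q.
P = [[0, 1], [-1, 0]]
Q = [[0, 1], [-1, 0]]
n = 4
A = [[0] * n for _ in range(n)]
for i in range(2):
    for j in range(2):
        A[i][2 + j] = Q[i][j]
        A[2 + i][j] = P[i][j]
b = [sum(A[i]) / 2 for i in range(n)]   # self-dual condition (6)
periods = {find_period(A, b, x) for x in product([0, 1], repeat=n)}
```

Here every one of the 16 initial configurations lies on a cycle of period exactly 4, and one can verify by hand that two synchronous steps map each state to its complement, as in the proof above.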

Bound to transient length:
Since {0,1} n is a finite set, we can bound the Lyapunov functional E s in the following way.
Let {x(t) : t = 0, ..., q−1} be a transient trajectory, that is: x(0)→x(1)→…→x(q−1)→x(q)→… where x(q) is the first vector belonging to the steady state associated with the initial configuration x(0), so that q is the transient length of this trajectory. The transient length of the neural network is defined as the greatest of such lengths. Since E_s is strictly decreasing in the transient phase and takes only finitely many values on {0,1}^n × {0,1}^n, there exists a smallest positive decrease ε = min {E_s(x(t)) − E_s(x(t+1))} over all transient steps, and the transient length of the network is therefore bounded above by (max E_s − min E_s)/ε, the maximum and minimum of E_s being taken over all configurations.
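The transient length can also be measured directly by exhaustive simulation, which any Lyapunov-based bound can only upper-bound. The small network below (symmetric blocks, arbitrarily chosen weights, half-integer thresholds for strictness) is our own illustration, not an example from the study:

```python
from itertools import product

def transient_and_period(A, b, x0):
    """Iterate the synchronous rule from x0; return (transient, period)."""
    seen, x, t = {}, tuple(x0), 0
    n = len(A)
    while x not in seen:
        seen[x] = t
        x = tuple(1 if sum(A[i][j] * x[j] for j in range(n)) >= b[i] else 0
                  for i in range(n))
        t += 1
    return seen[x], t - seen[x]

# A = [[0, Q], [P, 0]] with symmetric blocks Q = [[0,-2],[-2,0]],
# P = [[0,1],[1,0]].
A = [[0, 0, 0, -2],
     [0, 0, -2, 0],
     [0, 1, 0, 0],
     [1, 0, 0, 0]]
b = [0.5] * 4
results = [transient_and_period(A, b, x) for x in product([0, 1], repeat=4)]
max_transient = max(tr for tr, _ in results)
```

For this particular network every orbit falls into a fixed point (consistent with Theorem 1), and the longest transient over all 16 initial configurations has length 2.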