Study of Fractional Order Integro-Differential Equations by using Chebyshev Neural Network

Abstract: Recently, fractional integro-differential equations have been proposed for the modeling of many complex systems. Simulation of these equations requires the development of suitable numerical methods. This paper is devoted to finding the numerical solution of some classes of fractional integro-differential equations by employing the Chebyshev Neural Network (ChNN). The accuracy of the proposed method is demonstrated by comparing the numerical results computed with the Chebyshev neural network method against the analytical solution.


Introduction
Numerical methods are powerful tools for solving complicated problems in many fields (Bianca et al., 2009; Bhrawy and Alghamdi, 2012; Yang et al., 2014; Bhrawy and Aloi, 2013; Doha et al., 2011; Saha Ray, 2009; Mittal and Nigam, 2008; Saeedi and Samimi, 2012; Saeed and Sdeq, 2010; Ahmed and Salh, 2011). Recently, several numerical methods for solving Fractional Differential Equations (FDEs) and Fractional Integro-Differential Equations (FIDEs) have been presented. Bhrawy and Alghamdi (2012) and Yang et al. (2014) used collocation methods to solve the nonlinear fractional Langevin equation involving two fractional orders in different intervals and the fractional Fredholm integro-differential equations. Doha et al. (2011) and Bhrawy and Aloi (2013) proposed Chebyshev polynomial methods to solve the nonlinear Volterra and Fredholm integro-differential equations of fractional order and multi-term fractional-order differential equations. Irandoust-Pakchin and Abdi-Mazraeh (2013) applied the variational iteration method to solve fractional integro-differential equations with nonlocal boundary conditions. Several other methods have been presented for solving the fractional diffusion equation, fractional integro-differential equations and fractional nonlinear Fredholm integro-differential equations (Saha Ray, 2009; Mittal and Nigam, 2008; Saeedi and Samimi, 2012; Saeed and Sdeq, 2010).
In this study, a Chebyshev neural network with a single layer is proposed to solve integro-differential equations of fractional order. To minimize the computed error function, a feed-forward neural network with standard error back-propagation is used. This paper deals with the numerical analysis of fractional-order integro-differential equations of the form given below, subject to the stated initial conditions. The nonlinear multi-order fractional differential problem is studied using the Chebyshev neural network to obtain numerical results. The hidden layer is eliminated by expanding the input pattern with Chebyshev polynomials, so that a single-layer neural network can be used. The aim of the method is to find a semi-analytical solution of this kind of equation with high accuracy.
In Section 2, basic definitions of the fractional derivatives are presented. In Section 3, the main results of the proposed method are presented, including the learning algorithm of the Chebyshev neural network, the Chebyshev neural network formulation for fractional integro-differential equations, the computation of the gradient for the Chebyshev neural network and the algorithm of the proposed method. Applications of the proposed method are shown in Section 4 by solving several examples. Finally, we end the paper with conclusions in Section 5.

Basic Definitions of the Fractional Derivatives
Several definitions of the fractional derivative of order α > 0 can be found in (Miller and Ross, 1993). The Riemann-Liouville and Caputo fractional derivatives are the most commonly used definitions and are the ones employed in this study. The Riemann-Liouville fractional integration of order α used in this research is defined as follows (Grigorenko and Grigorenko, 2003; Podlobny, 2002). In Equation 2, Γ is the gamma function. The Caputo and Riemann-Liouville fractional derivatives of order α (Fadi et al., 2011) can be defined respectively as below, where D^α_* denotes the Caputo fractional derivative. The properties of the operator J^α and the fundamental properties of Caputo's fractional derivative are as follows (Fadi et al., 2011):
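These Caputo-derivative properties can be checked numerically. The following minimal sketch (illustrative, not part of the paper; the function name is an assumption) evaluates the standard Caputo derivative of a power function, D^α_* x^k = Γ(k+1)/Γ(k+1−α) · x^(k−α) for k ≥ ⌈α⌉, while integer powers below ⌈α⌉ (in particular constants) are annihilated.

```python
# Sketch (illustrative, not from the paper) of the standard Caputo-derivative
# property of power functions used throughout Section 2:
#   D^alpha_* x^k = Gamma(k+1) / Gamma(k+1-alpha) * x^(k-alpha),  k >= ceil(alpha),
# and D^alpha_* c = 0 for any constant c.
from math import gamma, ceil

def caputo_power(k: int, alpha: float, x: float) -> float:
    """Caputo fractional derivative of x**k of order alpha, evaluated at x."""
    if k < ceil(alpha):  # the Caputo derivative annihilates these terms
        return 0.0
    return gamma(k + 1) / gamma(k + 1 - alpha) * x ** (k - alpha)

# For alpha = 1 this reduces to the classical derivative: d/dx x^3 at x = 2 is 12.
print(caputo_power(3, 1.0, 2.0))   # 12.0
print(caputo_power(0, 0.5, 1.0))   # 0.0 (constants vanish)
```

For integer α the formula reduces to the ordinary derivative, which is the consistency property stated in the text.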

Main Results
Learning Algorithm of the Chebyshev Neural Network
Figure 1 shows the structure of the proposed Chebyshev neural network, which consists of an input block, one output block and a functional expansion block based on the Chebyshev polynomials. In this network the input data are expanded into several terms using Chebyshev polynomials. The learning algorithm is used to update the network parameters and minimize the error function. The functions F(z) = z, sinh(z) and tanh(z) are considered as activation functions. The network output with input data x and weights w may be computed as in (Mall and Chakraverty, 2014; 2015), where z is a weighted sum of the expanded input data; x is the input, and T_{j-1}(x) and w_j with j = 1, 2, ..., m denote the expanded input data and the weight vector, respectively. The first two Chebyshev polynomials are T_0(x) = 1 and T_1(x) = x. The higher-order Chebyshev polynomials can be computed by the recurrence T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x), where T_n(x) denotes the nth-order Chebyshev polynomial. Here the input pattern is expanded into m enhanced Chebyshev polynomial terms. The weights of the proposed network may then be modified using standard back-propagation (Mall and Chakraverty, 2014; 2015), where η and k denote the learning parameter and the iteration step used to update the weights, and E(x; w) is the error function.
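To make the forward pass concrete, the following sketch (an assumed implementation, not the authors' code; function names are illustrative) expands a scalar input by the recurrence above and evaluates the network output N(x; w) = F(z) with z = Σ_j w_j T_{j-1}(x):

```python
# Sketch of the single-layer ChNN forward pass: the scalar input x is expanded
# into the first m Chebyshev polynomials T_0..T_{m-1} via the recurrence
# T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), and the output is F(z) with
# z = sum_j w_j * T_{j-1}(x).
import math

def chebyshev_expand(x, m):
    """Return [T_0(x), ..., T_{m-1}(x)] by the three-term recurrence."""
    T = [1.0, x]
    while len(T) < m:
        T.append(2.0 * x * T[-1] - T[-2])
    return T[:m]

def network_output(x, w, F=math.tanh):
    """N(x; w) = F(z), z = weighted sum of the Chebyshev-expanded input."""
    z = sum(wj * Tj for wj, Tj in zip(w, chebyshev_expand(x, len(w))))
    return F(z)

# T_2(0.5) = 2*(0.5)^2 - 1 = -0.5; with F(z) = z the network returns z itself.
print(chebyshev_expand(0.5, 3))                             # [1.0, 0.5, -0.5]
print(network_output(0.5, [0.0, 1.0, 0.0], F=lambda z: z))  # 0.5
```

The functional expansion plays the role of the hidden layer, which is why a single layer of weights suffices.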

Chebyshev Neural Network Formulation for the Fractional Integro-Differential Equations
A general form of the fractional integro-differential equations considered can be written as in Equation 12, where Ψ defines the structure of the fractional integro-differential equation and y(x) and r denote the solution and the differential operator, respectively. If y_t(x, w) denotes the trial solution with adjustable parameters w, then substituting it into Equation 12 yields Equation 13, from which the following minimization problem can be concluded (Mall and Chakraverty, 2014; 2015; Hoda and Nagla, 2011). We now consider the general form of fractional-order integro-differential equations shown in Equation 15, with the boundary condition y(0) = γ_0. The trial solution y_t(x, w) of the feed-forward neural network with input x and adjustable parameters w for the above equation is written as below, where N(x; w) = z, sinh(z) or tanh(z). The error function for Equation 15 in general form can be written as in (Mall and Chakraverty, 2014; 2015). To minimize the error function E(x; w) with respect to w for the input x, we differentiate E(x; w) with respect to the parameters w.
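The explicit trial-solution formula is not reproduced here; a common construction in the cited Mall and Chakraverty framework, assumed in the sketch below, is y_t(x; w) = γ_0 + x·N(x; w), which satisfies the condition y(0) = γ_0 identically, with the error function taken as half the sum of squared equation residuals over the training points (all names are illustrative):

```python
# Sketch of the trial-solution and error-function construction.  The form
# y_t = gamma0 + x * N(x; w) is an assumption modeled on the cited
# Mall-Chakraverty approach, not necessarily the paper's exact formula.
def trial_solution(x, w, gamma0, N):
    """Trial solution that satisfies y_t(0) = gamma0 by construction."""
    return gamma0 + x * N(x, w)

def error_function(points, w, residual):
    """E(x; w) = 1/2 * sum_i residual(x_i, w)**2 over the training points."""
    return 0.5 * sum(residual(x, w) ** 2 for x in points)

# Whatever the network outputs, the initial condition holds exactly at x = 0:
print(trial_solution(0.0, [0.1, 0.2], 1.5, lambda x, w: 99.0))  # 1.5
```

Building the initial condition into the trial solution means the minimization only has to drive the equation residual to zero, not enforce the boundary data.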

Gradient Computation for the Proposed Algorithm
The fractional gradient of the proposed network output with respect to the network inputs, for N(x; w) = z, sinh(z) and tanh(z), is computed below. The fractional derivative of N(x; w) can be defined as follows, where α > 0 is the order of the fractional derivative and n ∈ ℕ is the smallest integer greater than α.

If N(x;w) = z
The fractional derivative of N(x; w) = z with respect to the input x is as follows:

If N(x;w) = sinh(z)
The fractional derivative of N(x; w) = sinh(z) with respect to the input x is as follows:

If N(x;w) = tanh(z)
The fractional derivative of N(x; w) = tanh(z) with respect to the input x is as follows, where w_j denote the parameters of the network. Finally, approximate solutions can be computed by using the converged Chebyshev neural network results in Equation 16.

Applications
In this section, we consider fractional-order integro-differential equations to show the effectiveness of the proposed method. The activation functions F(z) = z, sinh(z) and tanh(z) are used to find numerical results with high accuracy. The first five Chebyshev polynomials have been used.

Algorithm 1 Chebyshev Neural Network Algorithm
• Compute the Chebyshev polynomials T_0, T_1, ..., T_m
• Substitute the Chebyshev polynomials into Equation 10
• The value z obtained in Step 2 is substituted into N(x; w) = z, sinh(z) or tanh(z)
• The N(x; w) obtained in Step 3 is substituted into the trial solution y_t(x; w)
• The trial solution y_t(x; w) then replaces y(x) in the fractional differential equation
• Partition the interval [a, b] into n equal parts with step size h. The values of the fractional differential equation at the corresponding points are denoted E_0, E_1, ..., E_n, respectively
• Define the error function as E(x; w) = (1/2) Σ_{i=1}^{n} E_i^2. This error function must be minimized with respect to the unknown weights w_1, w_2, ..., w_n
• To minimize the error function E with respect to the unknown weights, set the gradient of E with respect to w_1, w_2, ..., w_n to zero, ∂E(x; w)/∂w_j = 0. This gives a system of n equations in n unknowns
• Solve the obtained system by a matrix method when N(x; w) is linear, or by numerical methods such as the Genetic Algorithm, Bee Colony Optimization or Ant Colony Optimization when it is nonlinear
• The weights may be modified using the back-propagation rule of Equation 11
• After substituting the modified weights into the trial solution y_t(x; w), approximate solutions of the equations considered in this study are obtained
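For the linear activation F(z) = z the residual is linear in the weights, so the stationarity conditions ∂E/∂w_j = 0 reduce to a linear system solvable by the matrix method mentioned above. The following runnable sketch (an assumed illustration, not the authors' code) applies these steps to the toy integer-order problem y'(x) = −y(x), y(0) = 1 on [0, 1] (exact solution e^{−x}), which stands in for the fractional examples so that no special functions are needed; the trial solution is y_t = 1 + x·z with m = 5 Chebyshev terms, and all names and the test problem are illustrative.

```python
# Algorithm 1 with linear activation: the residual of y' + y = 0 under the
# trial solution y_t = 1 + x * sum_j w_j T_{j-1}(x) is  1 + sum_j w_j phi_j(x),
# so dE/dw_j = 0 gives the normal equations A w = b, solved here by Gaussian
# elimination in pure Python.
import math

def cheb(x, m):
    T = [1.0, x]                     # T_0, T_1, then the three-term recurrence
    while len(T) < m:
        T.append(2 * x * T[-1] - T[-2])
    return T[:m]

def basis(x, m, h=1e-6):
    # phi_j(x) = d/dx [x T_{j-1}(x)] + x T_{j-1}(x), derivative by central diff.
    Tp, Tm, T = cheb(x + h, m), cheb(x - h, m), cheb(x, m)
    return [((x + h) * tp - (x - h) * tm) / (2 * h) + x * t
            for tp, tm, t in zip(Tp, Tm, T)]

m = 5
pts = [i / 10 for i in range(11)]    # eleven equally spaced training points
phi = [basis(x, m) for x in pts]
A = [[sum(p[j] * p[k] for p in phi) for k in range(m)] for j in range(m)]
b = [-sum(p[j] for p in phi) for j in range(m)]

# Gaussian elimination with partial pivoting, then back substitution.
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, m):
        f = A[r][col] / A[col][col]
        A[r] = [a - f * c for a, c in zip(A[r], A[col])]
        b[r] -= f * b[col]
w = [0.0] * m
for r in range(m - 1, -1, -1):
    w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, m))) / A[r][r]

def y_t(x):
    return 1.0 + x * sum(wj * Tj for wj, Tj in zip(w, cheb(x, m)))

print(abs(y_t(0.5) - math.exp(-0.5)))  # small: trial solution tracks e^{-x}
```

The direct linear solve replaces the iterative back-propagation update, which is only needed when the nonlinear activations make the system nonlinear in w.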

Example 1
First, we consider the following fractional integro-differential equation (Fadi et al., 2011), whose exact solution is given below. The proposed network was trained at ten points in the interval [0, 1] with the first five Chebyshev polynomials (m = 5). Comparisons of the absolute and maximum absolute errors between the exact and ChNN solutions with F(z) = z, sinh(z) and tanh(z) are given in Tables 1 and 2, respectively. Figure 2 shows the comparison between the analytical and ChNN solutions for the activation function F(z) = z. A plot of the absolute error between the analytical and ChNN results with F(z) = z is shown in Fig. 3. The comparison between the analytical and ChNN solutions and the absolute error between them with F(z) = sinh(z) are shown in Fig. 4 and 5, respectively. The corresponding results with F(z) = tanh(z) are presented in Fig. 6 and 7, respectively. According to Table 2, the maximum absolute errors for the activation functions F(z) = z, sinh(z) and tanh(z) are 3.40000×10^-40, 0.13542×10^-2 and 0.40843×10^-1, respectively. Therefore, the analytical and ChNN solutions are in good agreement for the activation function F(z) = z.

Example 2
We consider the following fractional integro-differential equation (Fadi et al., 2011). A comparison between the analytical and ChNN solutions with the activation function F(z) = z is given in Fig. 8. A plot of the absolute error between the analytical and ChNN solutions with F(z) = z is shown in Fig. 9.

Example 3
Consider the following fractional integro-differential equation (Hasan et al., 2013) (Equation 28). Figure 10 shows the comparison between the analytical and ChNN solutions with the activation function F(z) = z. Figure 11 presents a plot of the absolute error between the analytical and ChNN solutions with F(z) = z.

Example 4
Consider the following fractional integro-differential equation (Hasan et al., 2013). The comparison between the analytical and ChNN solutions and the plot of the absolute error with the activation function F(z) = z are presented in Fig. 12 and 13, respectively.

Example 5
Consider the following fractional integro-differential equation (Changqing and Jianhua, 2013). The comparison between the analytical and ChNN solutions with the activation function F(z) = z is shown in Fig. 14. A plot of the absolute error between the analytical and ChNN solutions with F(z) = z is shown in Fig. 15.

Example 6
Consider the following fractional integro-differential equation (Mittal and Nigam, 2008), with the initial condition y(0) = 0 and exact solution y(t) = t^3 (Equation 34). Figure 16 shows the comparison between the analytical and ChNN solutions with the activation function F(z) = z. Figure 17 presents a plot of the absolute error between the analytical and ChNN solutions with F(z) = z.

Conclusion
The Chebyshev neural network has been applied to solve linear and nonlinear fractional integro-differential equations. A single-layer Chebyshev neural network is presented to overcome the difficulty of this type of equation. Numerical results from the proposed method are compared with the analytical solutions. The maximum absolute error between the exact and Chebyshev neural network solutions with F(z) = z shows good agreement between the Chebyshev neural network and the analytical solutions. Comparisons of the results obtained from the Chebyshev neural network with the exact solutions show that this method is a capable tool for solving this kind of linear and nonlinear problem. The method can also be applied to complex ordinary and partial differential equations of fractional order.