THE COMPLEX STATISTICS PARADIGM AND THE LAW OF LARGE NUMBERS

The five basic axioms of Kolmogorov define probability in the real set R and do not take into consideration the imaginary part which takes place in the complex set C, a problem that we face in applied mathematics. Whatever the probability distribution of the random variable in R is, the corresponding probability in the whole set C is always equal to one, so the outcome of the random experiment in C can be predicted totally. This is the consequence of the fact that the probability in C is obtained by subtracting the chaotic factor from the degree of our knowledge of the system. In this study, I will evaluate the complex random vectors and their resultant, which represents the whole distribution and system in the complex space C. I will also define imaginary and complex expectations and variances, and I will prove the law of large numbers using the concept of the resultant complex vector. In fact, after extending Kolmogorov's system of axioms, the new axioms encompass the imaginary set of numbers, and this by adding to the original five axioms of Kolmogorov an additional three axioms. Hence, the concept of a complex random vector becomes clear and evident, and it follows directly from the newly added axioms. This result will be elaborated throughout this study using discrete probability distributions. Moreover, any experiment executed in the complex set C is the sum of the real set R and the imaginary set M. Therefore, the whole probability distribution of random variables can be represented totally by the resultant complex random vector Z, which is used subsequently to prove the very well known law of large numbers. In addition to my previous first paper, this second one elaborates the new field of "Complex Statistics" that considers random variables in the complex set C. Thus, the law of large numbers proves that this complex extension is successful and fruitful.


I. INTRODUCTION
By defining the concept of probability using only five basic axioms, Kolmogorov was working in the set of real numbers and was not considering the imaginary part that takes place in the set of complex numbers (Abou Jaoude et al., 2010; Abou Jaoude, 2005; 2007; Balibar, 1980; Bell, 1992; Benton, 1996; Dalmedico Dahan et al., 1992; Ekeland, 1991; Feller, 1968; Gleick, 1997; Hoffmann, 1975; Kuhn, 1996). This is in fact a problem that occurs in many applications in mathematics and physics. By considering supplementary new imaginary dimensions to the event occurring in the "real" laboratory, Kolmogorov's system of axioms can be extended to encompass the imaginary set of numbers. This can be done by adding to the original five axioms of Kolmogorov a complementary three axioms. Thus, any experiment can hence be executed in the complex set C, which is the sum of the real set R, represented by a real probability, and the imaginary set M, represented by an imaginary probability. No matter what the probability distribution of the random variable in R is, the corresponding probability in the whole set C is always equal to one.

JMSS
Therefore, the outcome of the random experiment occurring now in C is completely predictable. Consequently, chance and luck in R are replaced by total determinism in C. Actually, the probability in C is evaluated by subtracting the chaotic factor from the degree of our knowledge of the system. This proves to be essential and always leads to a probability equal to one in the complex set.
Formally, the three supplementary and complementary axioms are:
• Let Pm = i(1 − Pr) be the probability of an associated event in M (the imaginary part) to the event A in R (the real part). It follows that Pr + Pm/i = 1, where i² = −1 (the imaginary number)
• We construct the complex number z = Pr + Pm = Pr + i(1 − Pr) having a norm such that |z|² = Pr² + (Pm/i)²
• Let Pc denote the probability of an event in the universe C where C = R + M. We say that Pc is the probability of an event A in R with its associated event in M such that Pc² = (Pr + Pm/i)²
We can clearly see that the system of axioms defined by Kolmogorov could hence be expanded to take into consideration the set M of imaginary probabilities Pm.
By defining the chaotic factor 'Chf' as being equal to 2iPrPm and the degree of our knowledge |z|² as being equal to Pr² + (Pm/i)², it follows that: Pc² = degree of our knowledge − chaotic factor = |z|² − Chf = 1, therefore Pc = 1. This means that if we succeed in eliminating the chaotic factor in an experiment, the outcome probability will always be equal to one. One consequence of the results above is that: 1/2 ≤ |z|² ≤ 1 and −1/2 ≤ Chf ≤ 0.
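These relations can be checked numerically for any value of Pr. The following is a minimal sketch, written here in Python for brevity (the paper's own simulations later use C++); the function name complex_prob is only an illustrative choice:

```python
def complex_prob(pr):
    """Return (DOK, Chf, Pc^2) for a real probability pr, following the added axioms."""
    pm = 1j * (1 - pr)               # Pm = i(1 - Pr), the associated imaginary probability
    dok = pr**2 + (pm / 1j).real**2  # degree of our knowledge |z|^2 = Pr^2 + (Pm/i)^2
    chf = (2j * pr * pm).real        # chaotic factor Chf = 2i*Pr*Pm (real and non-positive)
    return dok, chf, dok - chf       # Pc^2 = degree of our knowledge - chaotic factor

for pr in [0.0, 0.25, 0.5, 0.9, 1.0]:
    dok, chf, pc2 = complex_prob(pr)
    assert abs(pc2 - 1.0) < 1e-12    # Pc = 1 in C, whatever Pr is
    assert 0.5 <= dok <= 1.0         # 1/2 <= |z|^2 <= 1
    assert -0.5 <= chf <= 0.0        # -1/2 <= Chf <= 0
```

The worst case Pr = 1/2 saturates both bounds at once: the degree of knowledge falls to 1/2 while the chaotic factor reaches −1/2, and their difference is still 1.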
Moreover, consider an experimenter tossing a coin in R: it is a game of luck, and the experimenter does not know the outcome. He will assign to each outcome a probability Pr and will say that the output is not deterministic. But in the universe C = R + M, an observer will be able to predict the outcome of the game, since he takes into consideration the contributions of M, so we write: Pc² = (Pr + Pm/i)² = |z|² − 2iPrPm. So in C all the hidden variables are known, and this leads to a deterministic experiment executed in an eight-dimensional universe (four real and four imaginary dimensions: three for space and one for time in R, and three for space and one for time in M). Hence Pc is always equal to 1.
In fact, the addition of new dimensions to our experiment resulted in the abolition of ignorance and non-determinism. Consequently, the study of this class of phenomena in C is of great usefulness, since we will be able to predict with certainty the outcome of the experiments conducted. In fact, the study in R leads to non-predictability and uncertainty. Therefore, instead of placing ourselves in R, we place ourselves in C and then study the phenomena, because in C the contributions of M are considered and a deterministic study of the phenomena becomes possible. Conversely, by considering the contribution of the hidden forces we place ourselves in C, and by ignoring them we restrict our study to non-deterministic phenomena in R.
I will describe in this study a powerful tool based on the concept of the complex random vector, which is a vector representing the real and the imaginary probabilities of an outcome, defined in the added axioms by the term z = Pr + Pm. I will then express the resultant complex random vector as the vector which is the sum of all the complex random vectors in the complex space. I will illustrate this methodology by considering a Bernoulli distribution, then a discrete distribution with N random variables as a general case. Afterward, I will prove the very well known law of large numbers using this new powerful concept (Boursin, 1986; Dacunha-Castelle, 1996; Dalmedico Dahan and Peiffer, 1986; Gullberg, 1997; Montgomery and Runger, 2005; Poincaré, 1968; Walpole, 2002). First, let us define the complex random vectors and their resultant by considering the following general Bernoulli distribution:

THE RESULTANT COMPLEX RANDOM VECTOR OF A BERNOULLI DISTRIBUTION
xj	x1	x2
Prj	Pr1 = p	Pr2 = q

Where:
x1 and x2 = The outcomes of the first and second random variables respectively
Pr1 and Pr2 = The real probabilities of x1 and x2 respectively
Pm1 and Pm2 = The imaginary probabilities of x1 and x2 respectively

Science Publications

We have: ∑_{j=1}^{N} Prj = p + q = 1, where N is the number of random variables, which is equal to 2 for this Bernoulli distribution.
The complex random vector corresponding to the random variable x1 is: z1 = Pr1 + Pm1 = p + i(1 − p) = p + iq. The complex random vector corresponding to the random variable x2 is: z2 = Pr2 + Pm2 = q + i(1 − q) = q + ip. The resultant complex random vector is defined as follows:

Z = z1 + z2 = (p + iq) + (q + ip) = (p + q) + i(p + q) = 1 + i = 1 + i(2 − 1) = 1 + i(N − 1)

The probability in the complex space C which corresponds to the complex random vector z1 is Pc1 and is computed as follows:

Pc1² = (Pr1 + Pm1/i)² = (p + q)² = 1, hence Pc1 = 1

This is coherent with the new complementary axioms defined for the extended Kolmogorov system.
Similarly, Pc2 corresponding to z2 is: Pc2² = (Pr2 + Pm2/i)² = (q + p)² = 1, hence Pc2 = 1. The probability in the complex space C which corresponds to the resultant complex random vector Z = 1 + i is Pc and is computed as follows:

Pc² = [Re(Z) + Im(Z)]²/S² = (1 + 1)²/2² = 1, hence Pc = 1

where S² = N² = 4 is an intermediary quantity used in our computation of Pc. Pc is the probability corresponding to the resultant complex random vector Z in the universe C = R + M and is also equal to 1. In fact, Z represents both z1 and z2, that is, the whole distribution of random variables in the complex space C, and its probability Pc is computed in the same way as Pc1 and Pc2.
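A quick numerical sketch of the Bernoulli case, in Python rather than the paper's C++; the normalization S² = N² used for the resultant is inferred from the numerical values given later for the equiprobable case:

```python
def bernoulli_vectors(p):
    """Complex random vectors of a Bernoulli distribution with Pr1 = p, Pr2 = q = 1 - p."""
    q = 1 - p
    return p + 1j * q, q + 1j * p    # z1 = p + i(1-p), z2 = q + i(1-q)

def pc(z, n):
    """Pc^2 for a resultant of n complex random vectors: (Re z + Im z)^2 / n^2."""
    return ((z.real + z.imag) / n) ** 2

for p in [0.1, 0.3, 0.5, 0.75]:
    z1, z2 = bernoulli_vectors(p)
    Z = z1 + z2
    assert abs(Z - (1 + 1j)) < 1e-12    # the resultant is always Z = 1 + i
    assert abs(pc(z1, 1) - 1) < 1e-12   # Pc1 = 1
    assert abs(pc(z2, 1) - 1) < 1e-12   # Pc2 = 1
    assert abs(pc(Z, 2) - 1) < 1e-12    # Pc = 1, with S^2 = N^2 = 4
```

Whatever p is chosen, the two vectors always sum to 1 + i, so the probability in C of the whole distribution is 1.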
By analogy with the case of one random variable zj, where Pcj² = |zj|² − Chfj, we have for the resultant vector Z = 1 + i(N − 1): Pc² = |Z|²/N² − Chf, where the degree of our knowledge is equal to |Z|²/N² = [1 + (N − 1)²]/N² and the chaotic factor is Chf = −2(N − 1)/N².
Notice that if N = 1 in the above formula, then |Z|²/N² = 1 and Chf = 0, so Pc² = 1, which is coherent with the calculations already done.
To illustrate the concept of the resultant complex random vector Z, I will use the following graph (Fig. 1).
Fig. 1. The resultant complex random vector Z = z1 + z2 in the complex space C

GENERALIZATION: THE RESULTANT COMPLEX RANDOM VECTOR Z OF A DISCRETE DISTRIBUTION
Let us generalize what has been found above for a Bernoulli distribution by considering the general discrete probability distribution of N random variables with the resultant complex random vector Z (Chan Man Fong et al., 1997; Greene, 2000; 2004; Warusfel and Ducrocq, 2004). The complex random vector corresponding to the random variable x1 is z1 = Pr1 + Pm1 = p1 + i(1 − p1) = p1 + iq1.
The complex random vector corresponding to the random variable x2 is z2 = Pr2 + Pm2 = p2 + i(1 − p2) = p2 + iq2, and so on. The complex random vector corresponding to the random variable xN is: zN = PrN + PmN = pN + i(1 − pN) = pN + iqN. The resultant complex random vector is defined as follows:

Z = ∑_{j=1}^{N} zj = ∑_{j=1}^{N} pj + i ∑_{j=1}^{N} qj = 1 + i(N − 1)

since ∑ pj = 1 and ∑ qj = ∑ (1 − pj) = N − 1.

Pc1 corresponding to z1 is: Pc1² = (p1 + q1)² = 1, and so on, up to PcN corresponding to zN: PcN² = (pN + qN)² = 1. Pc is the probability corresponding to the resultant complex random vector Z = 1 + i(N − 1) and is equal to:

Pc² = [1 + (N − 1)]²/N² = N²/N² = 1, hence Pc = 1

This is the corresponding probability of the resultant complex random vector Z = 1 + i(N − 1) that represents the whole distribution of random variables in the complex space C. As an example, let us consider the following discrete random distribution with four random variables, which means that in this case N = 4 (Guillen, 1995; Mandelbrot, 1997; Srinivasan and Mehata, 1978):

EXAMPLE OF A DISCRETE RANDOM DISTRIBUTION
We have: ∑_{j=1}^{N} Prj = 1, where N is the number of random variables.
The complex random vector corresponding to x1 is z1 = Pr1 + Pm1. The complex random vector corresponding to x2 is z2 = Pr2 + Pm2 = 1/4 + 3i/4. The complex random vector corresponding to x3 is z3 = Pr3 + Pm3. The complex random vector corresponding to x4 is z4 = Pr4 + Pm4.
The resultant complex random vector is: Z = z1 + z2 + z3 + z4 = 1 + i(4 − 1) = 1 + 3i, and Pc = 1 is the probability corresponding to the resultant complex random vector Z that represents the whole distribution of the four random variables in the complex space C.
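The identity Z = 1 + i(N − 1) does not depend on the particular probabilities, only on their summing to 1; a Python sketch, where the equiprobable four-outcome distribution and the value N = 7 are arbitrary illustrative choices:

```python
import random

def resultant(probs):
    """Resultant complex random vector Z = sum of z_j = sum of (p_j + i(1 - p_j))."""
    return sum(p + 1j * (1 - p) for p in probs)

# a four-variable case, as above (equiprobable, chosen only for illustration)
assert abs(resultant([0.25, 0.25, 0.25, 0.25]) - (1 + 3j)) < 1e-12

# an arbitrary random distribution with N = 7 outcomes
N = 7
raw = [random.random() for _ in range(N)]
probs = [r / sum(raw) for r in raw]          # normalized so that sum p_j = 1
Z = resultant(probs)
assert abs(Z - (1 + 1j * (N - 1))) < 1e-9    # Z = 1 + i(N-1) regardless of the p_j
assert abs(((Z.real + Z.imag) / N) ** 2 - 1) < 1e-9   # Pc = 1
```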

Second Case: A Distribution with N Random Variables
As a general case, let us consider then this probability distribution with N equiprobable random variables: Prj = pj = 1/N for every j. We have here: zj = 1/N + i(1 − 1/N) for every j, and we can notice that: Z = ∑ zj = 1 + i(N − 1), where N is the number of random variables. Therefore, the degree of our knowledge corresponding to the resultant complex vector is |Z|²/N² = [1 + (N − 1)²]/N², the chaotic factor is Chf = −2(N − 1)/N², and thus we can verify that we always have: Pc² = |Z|²/N² + 2(N − 1)/N² = [1 + (N − 1)]²/N² = 1.

What is important here is that we notice the following. Take for example:
N = 4: |Z|²/N² = [1 + (4 − 1)²]/4² = 0.625 ≥ 0.5 and |Chf| = 2(4 − 1)/4² = 0.375 ≤ 0.5
N = 5: |Z|²/N² = [1 + (5 − 1)²]/5² = 0.68 ≥ 0.625 and |Chf| = 2(5 − 1)/5² = 0.32 ≤ 0.375
N = 10: |Z|²/N² = [1 + (10 − 1)²]/10² = 0.82 ≥ 0.68 and |Chf| = 2(10 − 1)/10² = 0.18 ≤ 0.32
N = 100: |Z|²/N² = [1 + (100 − 1)²]/100² = 0.9802 ≥ 0.82 and |Chf| = 2(100 − 1)/100² = 0.0198 ≤ 0.18
N = 1000: |Z|²/N² = [1 + (1000 − 1)²]/1000² = 0.998002 ≥ 0.9802 and |Chf| = 2(1000 − 1)/1000² = 0.001998 ≤ 0.0198

We can deduce mathematically that |Z|²/N² tends to 1 and Chf tends to 0 as N tends to infinity. From the above, we can also deduce this conclusion: the more N increases, the closer the degree of our knowledge in R corresponding to the resultant complex vector is to being perfect, that is, equal to 1, and the closer the chaotic factor, which forbids us from predicting exactly the result of the random experiment in R, is to 0. Mathematically we say: if N tends to infinity, then the degree of our knowledge in R tends to 1 and the chaotic factor tends to 0. Moreover, if N = 1, this means that we have a random experiment with only one outcome, hence either Pr = 1 or Pr = 0; that is, we have either a sure event or an impossible event in R. In this case the degree of our knowledge is surely 1 and the chaotic factor is 0, since the experiment is either certain or impossible, which is absolutely logical.
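The numerical values above follow directly from the two formulas for the resultant vector; a small Python check (the function names are illustrative):

```python
def dok(N):
    """Degree of our knowledge of Z = 1 + i(N-1): |Z|^2 / N^2."""
    return (1 + (N - 1) ** 2) / N ** 2

def chf_mag(N):
    """Magnitude of the chaotic factor of Z: 2(N-1) / N^2."""
    return 2 * (N - 1) / N ** 2

for N, d, c in [(4, 0.625, 0.375), (5, 0.68, 0.32), (10, 0.82, 0.18),
                (100, 0.9802, 0.0198), (1000, 0.998002, 0.001998)]:
    assert abs(dok(N) - d) < 1e-9 and abs(chf_mag(N) - c) < 1e-9
    assert abs(dok(N) + chf_mag(N) - 1.0) < 1e-12   # Pc^2 = DOK - Chf = 1 always

# the limiting behaviour: the degree of knowledge tends to 1, the chaotic factor to 0
assert dok(10**6) > 0.99999 and chf_mag(10**6) < 1e-5
```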

The Law of Large Numbers and the Resultant Complex Random Vector Z

The law of large numbers says that: "As N increases, the probability that the value of the sample mean is close to the population mean approaches 1." We can now deduce the following conclusion related to the law of large numbers.
We can see, as we have proved, that as N increases, the degree of knowledge of the resultant complex vector tends to 1. Let the random variables xj correspond to the particles or molecules moving randomly in a gas or a liquid. If we study a gas or a liquid with billions of such particles, N is big enough (e.g., of the order of the Avogadro number) for its corresponding temperature, pressure and energy to tend to the means of these quantities for the whole gas. This is because the chaotic factor of the whole gas, that is, of the resultant complex random vector representing all the random particles or vectors, tends to 0; thus, the behavior of the whole system in R is predictable with great precision, since the degree of our knowledge of the whole gas tends to 1. Figures 2 and 3 below illustrate this result.
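The classical statement itself can be illustrated by simulation; a minimal Python sketch in the spirit of the paper's Table 7, where the distribution, the sample sizes and the seed are arbitrary illustrative choices:

```python
import random

random.seed(7)                                     # fixed only for reproducibility
outcomes, probs = [1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]
mu = sum(x * p for x, p in zip(outcomes, probs))   # population mean = 3.0

def sample_mean(n):
    """Mean of n independent draws from the distribution."""
    return sum(random.choices(outcomes, weights=probs, k=n)) / n

# as N grows, the sample mean settles near the population mean,
# mirroring the chaotic factor of the resultant vector Z tending to 0
errors = [abs(sample_mean(n) - mu) for n in (10, 1_000, 100_000)]
assert errors[-1] < 0.05
```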
Hence we have joined here two different key concepts: the law of large numbers and the resultant complex random vector. The first comes from ordinary statistics and probability theory, and the second from the new theory of complex probability and statistics. This looks very interesting and fruitful and shows the validity and the benefits of extending Kolmogorov's axioms to the complex set. Let us now compute the real, imaginary and complex expectations of the random variables (Montgomery and Runger, 2005; Müller, 2005; Orluc and Poirier, 2005; Walpole, 2002). For this purpose, let us consider the following Bernoulli distribution:

EXPECTATIONS CORRESPONDING TO THE COMPLEX RANDOM VECTORS
We can see that:
• The complex random vector corresponding to x1 is z1 = p + iq = 1/3 + 2i/3
• The complex random vector corresponding to x2 is z2 = q + ip = 2/3 + i/3
• The resultant complex random vector is: Z = z1 + z2 = 1 + i

The expectation of the random variables with the real probability part is defined by:

Er(x) = ∑_{j=1}^{2} xj Prj = x1 Pr1 + x2 Pr2 = 1 × (1/3) + 2 × (2/3) = 1/3 + 4/3 = 5/3

The expectation of the random variables with the imaginary probability part is defined by:

Em(x) = x1 Pm1 + x2 Pm2 = 1 × (2i/3) + 2 × (i/3) = 4i/3

The expectation of the random variables corresponding to the complex random vectors is defined by:

Ec(x) = Er(x) + Em(x) = (x1 p + x2 q) + (x1 iq + x2 ip) = x1 (p + iq) + x2 (q + ip) = x1 z1 + x2 z2

Hence Ec(x) = 5/3 + 4i/3 = Er(x) + Em(x). Figure 4 illustrates the graphical relation between the three expectations: the real one, the imaginary one and the complex one.
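For this Bernoulli example (two outcomes 1 and 2 with p = 1/3), the three expectations can be checked directly; a Python sketch:

```python
x1, x2 = 1, 2
p = 1 / 3
q = 1 - p                 # = 2/3
z1, z2 = p + 1j * q, q + 1j * p

Er = x1 * p + x2 * q                  # real expectation: sum of x_j Pr_j
Em = x1 * (1j * q) + x2 * (1j * p)    # imaginary expectation, using Pm1 = iq, Pm2 = ip
Ec = x1 * z1 + x2 * z2                # complex expectation: sum of x_j z_j

assert abs(Er - 5/3) < 1e-12          # Er(x) = 5/3
assert abs(Em - 4j/3) < 1e-12         # Em(x) = 4i/3
assert abs(Ec - (Er + Em)) < 1e-12    # Ec(x) = Er(x) + Em(x) = 5/3 + 4i/3
```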
We can notice that |z1| = |z2|. This is not a special case for this distribution but is always true for any Bernoulli distribution having any probability values. Actually, and in general, |z1|² = p² + q² and |z2|² = q² + p², hence |z1| = |z2|. Due to this property, it can be shown for any Bernoulli distribution that corresponding relations hold between Z, its conjugate and the complex expectation, where Z̄ = 1 − i is the conjugate of the resultant complex random vector Z = 1 + i and Ēc(x) is the conjugate of the complex expectation vector Ec(x) = Er(x) + Em(x). And |Z|² = Z × Z̄, which derives from the well known theory of complex numbers. Further identities of the same type can be deduced in the same distribution, and all these relations prove to be valid for any Bernoulli distribution.
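The property |z1| = |z2| and the conjugate identity |Z|² = Z × Z̄ can be checked for arbitrary Bernoulli parameters; a short Python sketch:

```python
for p in [0.2, 1 / 3, 0.5, 0.8]:
    q = 1 - p
    z1, z2 = p + 1j * q, q + 1j * p
    assert abs(abs(z1) - abs(z2)) < 1e-12          # |z1|^2 = p^2 + q^2 = |z2|^2
    Z = z1 + z2
    assert abs(Z - (1 + 1j)) < 1e-12               # Z = 1 + i, with conjugate 1 - i
    assert abs((Z * Z.conjugate()).real - abs(Z) ** 2) < 1e-12   # |Z|^2 = Z x Z-bar = 2
```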
Numerically, the degree of our knowledge of the resultant complex random vector Z = 1 + i is |Z|²/N² = (1² + 1²)/2² = 1/2. Hence, it can be verified, in all cases and for any distribution, that Pc = 1. Thus, we conclude that for any Bernoulli distribution we can express both the degree of our knowledge of the resultant complex random vector, in terms of Z and the complex expectation Ec of the random variable in the universe C = R + M, and the chaotic factor of the resultant complex random vector Z. Consequently, the resultant probability in C is: Pc² = degree of our knowledge − chaotic factor = 1, hence Pc = 1.

Case 1: A General Distribution
Let us now determine the other characteristics of a general discrete probability distribution, which are the real, imaginary and complex variances of the random variables. For this purpose, let us consider the following general probability distribution for N random variables. The expectation corresponding to the imaginary part of the random variables xj is defined by:

Em(x) = ∑_{j=1}^{N} xj Pmj = x1 Pm1 + x2 Pm2 + … + xN PmN = x1 iq1 + x2 iq2 + … + xN iqN

The expectation corresponding to the complex probability of the random variables xj can be computed by:

Ec(x) = (x1 p1 + … + xN pN) + (x1 iq1 + … + xN iqN) = x1 (p1 + iq1) + … + xN (pN + iqN) = x1 z1 + x2 z2 + … + xN zN

so that we also have Ec(x) = Er(x) + Em(x).
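The identity Ec(x) = Er(x) + Em(x) holds for any discrete distribution; a Python sketch with a randomly generated distribution (the outcome values, N and the seed are arbitrary illustrative choices):

```python
import random

random.seed(0)                                         # fixed only for reproducibility
N = 6
x = [random.uniform(-5, 5) for _ in range(N)]          # arbitrary outcome values
raw = [random.random() for _ in range(N)]
p = [r / sum(raw) for r in raw]                        # probabilities with sum p_j = 1

Er = sum(xj * pj for xj, pj in zip(x, p))              # sum of x_j Pr_j
Em = sum(xj * 1j * (1 - pj) for xj, pj in zip(x, p))   # sum of x_j Pm_j = sum of x_j iq_j
Ec = sum(xj * (pj + 1j * (1 - pj)) for xj, pj in zip(x, p))  # sum of x_j z_j

assert abs(Ec - (Er + Em)) < 1e-9                      # Ec(x) = Er(x) + Em(x)
```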
The variance of the real part of the random variables xj is defined by:

Vr(x) = ∑_{j=1}^{N} (xj − Er(x))² Prj

which is the ordinary variance definition that we know. The variance of the imaginary part of the random variables xj is defined analogously, using the imaginary probabilities Pmj and the imaginary expectation Em(x). Similarly, the variance of the complex probability of the random variables xj is defined using the complex random vectors zj and the complex expectation Ec(x).

Therefore, we can directly see from Equations 1 and 2 above that the same relation holds between the three variances, as was proven in the general case of a probability distribution with N random variables. Numerical simulations verify what has been found earlier (Cheney and Kincaid, 2004; Deitel and Deitel, 2003; Gentle, 2003; Gerald and Wheatley, 1999; Liu, 2001; Christian and Casella, 2005). We will use the Monte Carlo simulation method with the help of the programming language C++ and its predefined pseudorandom function rand(), which generates random numbers with a uniform distribution. Tables 1-3 are simulations of a Bernoulli distribution where the complex random vectors are chosen randomly by C++. Tables 4-6 are simulations of a uniform distribution with three random variables, whose complex random vectors are also chosen randomly by C++. Table 7 is a simulation that confirms the direct relation between the resultant complex vector Z and the law of large numbers.
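The kind of check performed in Tables 4-6 can be mirrored in a few lines; a Python stand-in for the C++ rand()-based runs, where the trial count is an arbitrary choice:

```python
import random

def random_distribution(n):
    """A random discrete distribution with n outcomes, normalized so the p_j sum to 1."""
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    return [r / s for r in raw]

for _ in range(1000):                       # many random trials, as in Tables 4-6
    z = [p + 1j * (1 - p) for p in random_distribution(3)]
    Z = sum(z)
    assert abs(Z - (1 + 2j)) < 1e-9         # Z = z1 + z2 + z3 = 1 + i(N-1) = 1 + 2i always
    assert abs(((Z.real + Z.imag) / 3) ** 2 - 1) < 1e-9   # Pc = 1, just as expected
```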

CONCLUSION
In this study I have elaborated the new field of "Complex Statistics", which is an original paradigm that was initiated in my first paper on the expansion of Kolmogorov's system of axioms. I have defined in this study a new powerful tool, the concept of the complex random vector, which is a vector representing the real and the imaginary probabilities of an outcome, identified in the added axioms as the term z = Pr + Pm. I have then defined and expressed the resultant complex random vector as the vector which is the sum of all the complex random vectors, representing the whole distribution and system in the complex space C. I have illustrated this methodology by considering a Bernoulli distribution, then a discrete distribution with N random variables as a general case. Afterward, I have determined the characteristics (expectation and variance) of discrete distributions corresponding to the imaginary probabilities and to the complex random vectors. Thus, I have shown that there is a correspondence among the real, imaginary and complex expectations, as well as among the real, imaginary and complex variances, for any Bernoulli distribution as well as for any probability distribution. Moreover, I have proven that there is a direct relation between the concept of the resultant complex vector and the very well known law of large numbers. Using this new concept and tool, I have succeeded in demonstrating the law of large numbers in a new way. Additional development of this new complex paradigm will be done in subsequent work. Hence, the first and second papers on complex probabilities, written after extending Kolmogorov's axioms, establish so far a new field in mathematics which can truly be called "Complex Statistics".

NOMENCLATURE
C = The complex set of numbers = the real set R + the imaginary set M
Pr = Probability in the real set R
Pm = Probability in the imaginary set M corresponding to the real probability in the set R
Pc = Probability of an event A in R with its associated event in M = probability in the complex set C = always 1
z = Complex number = sum of Pr and Pm = complex random vector
|z|² = The degree of our knowledge of the random experiment; it is the square of the norm of z
Chf = The chaotic factor of z
i = The imaginary number, where i² = −1
Er = Expectation in the real set R
Em = Expectation in the imaginary set M
Ec = Expectation in the complex set C
Vr = Variance in the real set R
Vm = Variance in the imaginary set M
Vc = Variance in the complex set C

Fig. 2. The degree of our knowledge, the chaotic factor and the Pc of Z (1 ≤ N ≤ 40); we got the bound 0.5 from the study done above of a probability distribution with two random variables
Fig. 3.
Fig. 4.

Table 1 .
Computation of Pc for different values of z1 and z2, which are the complex random vectors of a Bernoulli distribution and which are chosen at random. In this case, the resultant complex random vector is Z = z1 + z2 and is always equal to 1 + i. The corresponding probability of Z in C is always 1, just as expected

Table 2 .
Computation of the real, imaginary and complex expectations for different values of z1 and z2, which are chosen at random, and the verification that we always have Ec(x) = Er(x) + Em(x)

Table 3 .
Computation of the real, imaginary and complex variances for different values of z1 and z2, which are chosen at random, and the verification that the corresponding relation for Vc always holds

Table 4 .
Computation of Pc for different values of z1, z2, z3, which are the complex random vectors of the distribution and which are chosen at random. In this case, the resultant complex random vector is Z = z1 + z2 + z3 and is always equal to 1 + 2i.

Table 5 .
Computation of the real, imaginary and complex expectations for different values of z1, z2, z3, which are chosen at random, and the verification that we always have Ec(x) = Er(x) + Em(x)

Table 6 .
Computation of the real, imaginary and complex variances for different values of z1, z2, z3, which are chosen at random, and the verification that the corresponding relation for Vc always holds

Table 7 .
The resultant complex random vector Z = z1 + z2 + … + zj + … + zN, with 1 ≤ j ≤ N, and the verification of the law of large numbers