Persian Curve Versus Monte Carlo Method

Corresponding Author: Abdolrasoul Ranjbaran, Department of Civil and Environmental Engineering, Shiraz University, Iran. E-mail: ranjbarn@shirazu.ac.ir, aranjbaran@yahoo.com. Abstract: The Monte Carlo Simulation method is a powerful tool for solving problems involving random variables. The basic idea of a Monte Carlo simulation is to first generate samples of random inputs from their assumed distribution functions and then perform a deterministic calculation on the generated random inputs, based on a mathematical model of the system, to obtain output results. An early version of Monte Carlo simulation is the famous needle experiment. The idea of random experiments has been used for solving many complex problems. Simulation-based approaches have some disadvantages: their implementation needs massive computational resources and long calculation times. Moreover, providing a linkage between the input to the system and its output is difficult. As a remedy, the phenomenon is considered as a change in the state of the system. Via logical reasoning, concise mathematics and real world data, the output is related to the input via the Persian Curve. The Persian Curve provides a simple, cheap and exact solution to the problem. Consequently, the Persian Curve is proposed as a replacement for the Monte Carlo Simulation. The validity of the work is verified via concise mathematics and comparison of the results with those of others.


Introduction
The Monte Carlo Simulation (MCS) is a well-known statistical method which is used for the analysis of phenomena with no clear deterministic solution, Fig. 1. Despite its success in different fields of human knowledge, the (MCS) is an expensive, time consuming and uncertain method (Badger, 1994; Veach, 1998). The work is continued with verification via analysis of the fragility curve for structures, conclusions and a list of references. In this study a phenomenon is considered as a change in the state of a system. The state variable (ξ ∈ [0, 1]) is defined as the lifetime or identification parameter of a system. Every system is expressed by its specific (ξ). Consequently all derivations are finally expressed in terms of the (ξ).

The Monte Carlo Simulation
The Monte Carlo Simulation is a powerful tool for the analysis of complex systems involving random variables (Badger, 1994; Veach, 1998; Laosiritaworn, 2002; Lefebvre, 2007; Murray, 2007; Manohar, 2009; Wijesinghe, 2011; Du, 2012; Bolin, 2013; Cook et al., 2013; Parkinson, 2013; Pollock, 2013; Romano, 2013; Zio, 2013; Goerdin, 2014; Poole, 2014; Rawlinson, 2015; Hahn, 2015; Wang, 2015; Hochuli, 2016; Sánchez, 2016; Zhao, 2016; Fadele, 2017; Feng, 2017; Mouawad, 2017; Schwarm, 2017a; 2017b; Haqiqat and Müller, 2018; Hou, 2018; Laengen, 2018; Albes, 2019; Huda, 2018; Pakyuz-Charrier, 2018; Unwin, 2018; Wang, 2018; Webster, 2019; Zhang, 2019; Mazhdrakov et al., 2018; Corbella, 2019; de Freitas, 2019; Berg, 2019; Alamri, 2020; Apostolopoulou, 2019; Cumberworth, 2021; Ead, 2020; Ketron, 2020; Cosgrove, 2020; Dash, 2020; Debrot, 2020; Diniz, 2020; Eagle, 2020; Guijarro Gámez, 2020; Nilakanta, 2020; Welding, 2020; Wang, 2021; Sheridan-Methven, 2020). The basic idea of a Monte Carlo Simulation is to first generate samples of random inputs from their assumed distribution functions, and then perform a deterministic calculation on the generated random inputs, based on a mathematical model of the system, to obtain numerical results. An early version of Monte Carlo Simulation is the famous needle experiment, performed by the French mathematician Comte de Buffon (1707-1788). Consider a plane with parallel lines a distance (d) apart and a needle of length (L < d) that is randomly positioned on the plane. Note that: (1) The shortest distance from the needle center to a line (x) is uniformly distributed over (0, d/2). (2) The angle between the needle and a line (θ) also follows a uniform distribution over (0, π/2). (3) The needle crosses a line when (x ≤ (L/2) sin θ). The two random variables (x and θ) are independent. Therefore, their joint probability density function is (f(x, θ) = 4/(πd)). Let (A) denote the event that the needle lies across a line. The probability (P(A) = 2L/(πd)) is determined in Eq.
(1) as follows:

P(A) = 2L/(πd) (1)

(Fig. 2: mathematics, real world data and logical reasoning lead to the Persian Curve PC(ξ).)

Buffon verified this probability by repeatedly throwing a needle on the plane with parallel lines. This experiment reflects the basic idea of Monte Carlo Simulation. That is, performing the experiment (n) times, if the needle crosses a line (m) times, then the probability of (A) is approximated in Eq. (2):

P(A) ≈ m/n (2)

As remarked by Laplace in 1812, one can estimate the value of (π) by conducting Buffon's needle experiment and setting Eq. (1) equal to Eq. (2), as in Eq. (3):

π ≈ 2Ln/(dm) (3)

In 1901, the Italian mathematician Lazzarini (Badger, 1994) performed such an experiment and reported a remarkably accurate estimate of (π). This experiment offered such an impression that one can estimate the probability of an event or a random quantity via random simulation. In 1946, physicists at Los Alamos were working on the distance likely to be traveled by neutrons in different materials under the Manhattan Project. They were unable to solve the problem using conventional deterministic mathematical methods. Then Stanislaw Ulam proposed the idea of using random experiments. This idea was subsequently developed by von Neumann, Metropolis and others to solve many complex problems in making the atomic bomb. Since the work was secret, the random experiment method required a code name. Metropolis suggested the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle borrowed money from relatives to gamble (Zhang, 2019). A more scientific basis of Monte Carlo simulation is the strong law of large numbers, which guarantees that the average of a set of independent and identically distributed random variables converges to the mean value with probability 1.
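The needle experiment above can be simulated directly. The sketch below is a minimal Monte Carlo implementation of Eqs. (1)-(3): it draws (x) and (θ) uniformly, counts crossings and returns the resulting estimate of (π). The function name and the parameter defaults are illustrative, not taken from the original experiment.

```python
import math
import random

def buffon_pi(n, L=1.0, d=2.0, seed=0):
    """Estimate pi with Buffon's needle: drop n needles of length L (< d)
    on a plane ruled with parallel lines spaced d apart."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n):
        x = rng.uniform(0.0, d / 2.0)            # center-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi / 2.0)  # angle between needle and lines
        if x <= (L / 2.0) * math.sin(theta):     # crossing condition
            crossings += 1
    # P(A) = 2L/(pi*d) is approximated by m/n, hence pi ~ 2*L*n/(d*m)
    return 2.0 * L * n / (d * crossings)

print(buffon_pi(100_000))  # approaches pi as n grows
```

As with Lazzarini's trial, the estimate improves only slowly: the sampling error shrinks in proportion to 1/√n.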
The strong law of large numbers is expressed as follows. For a sequence of (n) statistically independent and identically distributed random variables (Xi, i = 1, n) with a mean value of (μ), the average of the variables converges to (μ) with probability 1, as in Eq. (5):

(1/n)(X1 + X2 + … + Xn) → μ as n → ∞, with probability 1 (5)

The Monte Carlo simulation model is shown in Fig. 1. According to the strong law of large numbers, the average converges to a certain value as (n → ∞). Equation (5), for a system with the characteristic function (h(ξ, x)) and the random system output (x = (x1, …, xn)) with the probability distribution (fo(x)), is expressed in Eq. (6) for the system (ξ):

PMC(ξ) = E[h(ξ, x)] = ∫ h(ξ, x) fo(x) dx (6)

The idea of random experiments has been used for solving many complex problems (Badger, 1994; Veach, 1998). Simulation-based approaches have some disadvantages. Their implementation needs massive computational resources and long calculation times. Moreover, providing a linkage between the input to the system and its output via a closed form solution is difficult. In view of Eq. (6) the aim is determination of the (PMC), which is only a function of (ξ)! There is no sign of the method of solution in it. It appears that using the (MCS) for this job is the worst method that could be selected. Despite all these efforts, since infinity is out of reach, the solution by the (MCS) always contains epistemic uncertainty, which is added to the aleatory one. As a remedy, the Abdolrasoul Ranjbaran Team (ART) conducted extensive research on the analysis of change in systems, called the Change of State Philosophy (CSP). The result of this research is the Persian Curve (PC), which is expressed in the next section. As will be observed in the next section, the (ART) investigations, via logical reasoning, led directly to the Persian Curve (PC(ξ)), which clearly can be used in place of (PMC(ξ))! As will be seen, the Persian Curve is equal to the expected value of the system output (obtained by stochastic methods including the (MCS) for n → ∞) with probability 1.
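The strong-law argument behind Eqs. (5) and (6) can be illustrated with a generic Monte Carlo estimator. The sketch below is an assumed, minimal form (the names mc_expectation, h and sampler are illustrative, not from the text): it averages (h) over (n) independent draws and, by the strong law of large numbers, converges to the expected value as (n → ∞).

```python
import random

def mc_expectation(h, sampler, n, seed=1):
    """Plain Monte Carlo: average h(X) over n i.i.d. draws of X.
    The average converges to E[h(X)] with probability 1 as n grows."""
    rng = random.Random(seed)
    return sum(h(sampler(rng)) for _ in range(n)) / n

# Example with a known answer: E[X^2] = 1/3 for X ~ Uniform(0, 1).
est = mc_expectation(lambda x: x * x, lambda rng: rng.random(), 100_000)
print(est)  # close to 1/3
```

The epistemic uncertainty discussed in the text is visible here: for any finite (n) the estimate still carries a sampling error of order 1/√n.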
All methods in human knowledge are approximate (since n ≪ ∞) as compared to the Persian Curve.

The Persian Curve Basics
For a change in a system (e.g., a structure) the survived capacity and the fled capacity are important (Ranjbaran et al., 2008; Ranjbaran and Rousta, 2009; Ranjbaran, 2010; Ranjbaran et al., 2011; Ranjbaran, 2012; Ranjbaran and Ranjbaran, 2014; Ranjbaran, 2015; Ranjbaran and Ranjbaran, 2016; 2017a; 2017b; 2017c; Ranjbaran et al., 2020a-f; 2021a; 2021b). The capacity (kS), called the system stiffness, is defined as a positive entity which has a finite specified value at the beginning of the lifetime (origin) and reduces to zero at the end of the system's lifetime (destination). The inverse of the system stiffness is called the system flexibility (fS). The concepts of stiffness and flexibility are used for better understanding! They are general and are not necessarily those used in structural mechanics! For a system with a given lifetime, the survived stiffness (kSS = kS − kC) and the survived flexibility (fSS = fS + fC) are obviously inverses of each other, where (kC) is the change stiffness and (fC) is the change flexibility. This obvious relation is defined in Eq. (7), shown in Fig. 3, and used for reliable analysis of changing systems as follows:

kSS × fSS = (kS − kC)(fS + fC) = 1 (7)

Note that although the (kS and fS) are known at the origin, the (kC and fC) at each point along the lifetime should be determined. This is done as follows.
There is only one equation, Eq. (7), at hand for the solution. Therefore, for the time being, take the (kS, fS and fC) as known and solve Eq. (7) for the (kC).
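A small numerical sketch of this step, assuming Eq. (7) reads (kS − kC)(fS + fC) = 1 with (fS = 1/kS), as the definitions in the text imply; the numbers are illustrative:

```python
def change_stiffness(kS, fC):
    """Solve (kS - kC)(fS + fC) = 1, with fS = 1/kS, for the change
    stiffness kC, taking kS and fC as known for the time being."""
    fS = 1.0 / kS
    return kS - 1.0 / (fS + fC)

kS, fC = 4.0, 0.5
kC = change_stiffness(kS, fC)
kSS = kS - kC          # survived stiffness
fSS = 1.0 / kS + fC    # survived flexibility
print(kSS * fSS)       # 1.0: survived stiffness and flexibility are inverses
```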
Note that the phenomenon functions are ratios in a unit interval. Since construction of functionals in terms of two functions is not possible, the phenomenon functions are customized into the state functions as follows.
In view of Eq. (10), the state functions are defined as ratios in unit intervals in Eq. (11). Eq. (11) may be rewritten as boundary value problems in Eq. (12), in which (min) and (max) denote the minimum and maximum respectively. At this stage, attention is paid to enhancing the construction of the phenomenon functions. Via the definition of the (kSS and fSS) in Eq. (7) and the concept of crack compliance (fC) in fracture mechanics (Anderson, 2017), the authors detected the fact that the (fC) is directly proportional to the (kS). This detection is called the Persian Principle of Change (PPC). In view of the (PPC), the (fC) is innovatively defined in Eq. (16). The (kS) cannot be directly determined from real world data (e.g., reliable test results). Toward a better definition and to provide the conditions for using real world data, Eq. (17) is rewritten in Eq. (18) in terms of two positive control parameters (aM) and (b) (Ranjbaran et al., 2020b). Flexibility for translation and rotation of the phenomenon functions in the (1 × 1) working space, which allows experts to enforce their will, is provided by the form of the phenomenon functions in Eq. (18) and the selection of the two control parameters from reliable real world data. To this end the system identification flag (state variable) and the mathematical basis for determination of the capacity are proposed, separately, in abstract form. Consequently the work is certain and universal, in the sense that it is independent of geometry, size, material properties, etc. Therefore, it applies equally to all natural phenomena as changes in systems. The system identification flag and the basic mathematical formulation are connected via the Persian Curve as follows.

Persian Curve Birth
The basic formulation for determination of the capacity ratio is derived independently of real world data. In order to be able to apply the proposed formulation to natural phenomena (changes in real systems), it must be connected to the system identification flag. This is done via construction of the Persian Curve as follows. The phenomenon functions are mapped onto the real world data. The ratios at the beginning (PO) and at the termination (PT) of the phenomenon are selected from reliable real world data, Fig. 6. For a real natural phenomenon the (SR) is renamed the Shiraz curve (PS), the (FR) is renamed the Fasa curve (PF), and the collection of the two is called the Persian Curve (PC) as defined in Eq. (19). Note that Eq. (19) for (PO = 1 and PT = 0) and (PO = 0 and PT = 1) converges to the (SR) and (FR) in Eq. (18) respectively. In compliance with common practice in the literature, the derivative of the phenomenon functions with respect to (ξ) is defined as the Zahedan curve (PZ) in Eq. (20). Note that the (PZ) corresponds to the Probability Density Function (PDF) in the literature. To this end the formulation is complete. What remains is determination of the control parameters (aM and b) from real world data. This is done as follows. To prepare for a simple and user friendly analysis, Eq. (19) is rewritten in Eq. (21), in which (PC) is the coordinate of a point on the real world data along the lifetime. The aim is determination of (aM and b) from the nonlinear Eq. (21).

Comparison of the Persian Curve and the Monte Carlo Simulation
Comparison of Eq. (6) and Eq. (19), that is, the Monte Carlo curve (PMC) and the Persian Curve (PC), shows that both are system specific functions. The former is expensive and contains epistemic uncertainty, while the latter is cheap, simple and free of epistemic uncertainty. The ideas behind the two are shown graphically in Figs. 1 and 2, respectively. Further comparison of the results is included as follows.
The Persian Curve is defined in terms of an abstract lifetime in a unit interval (ξ ∈ [0, 1]). In real phenomena the lifetime is selected in a truncated interval ([O, T]), where (O) is the origin lifetime and (T) is the termination lifetime. In order to use the Persian Curve for real world data, the real lifetime should be mapped onto (ξ) as in Eq. (27). Consistent with the literature, the results of the Persian Curve are compared with those of the Monte Carlo Method in the following example.
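Although Eq. (27) itself is not reproduced here, the described map from the truncated interval ([O, T]) onto the unit interval is linear. A sketch under that assumption (the function and variable names are illustrative):

```python
def to_state_variable(lam, O, T):
    """Map a real lifetime lam in [O, T] onto the abstract state
    variable xi in [0, 1]: O maps to 0 and T maps to 1 (linear map
    inferred from the text, not quoted from Eq. (27))."""
    return (lam - O) / (T - O)

print(to_state_variable(25.0, 0.0, 100.0))  # 0.25
```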
Example 1: Iervolino et al. (2016; Baltzopoulos et al., 2017) developed a fragility curve for a six-story reinforced concrete moment resisting frame (BF) using the SPO2FRAG software. Compare the (BF) with the Persian-Fasa-curve as the proposed fragility function. Note that SPO2FRAG is developed based on the Monte Carlo Method.
Solution: Via hundreds of thousands of expensive nonlinear incremental dynamic analyses, the required data was prepared and the fragility curve (BF) (a log-normal curve fitted to the data) was developed, which is scanned and shown in Fig. 7. The selected key point coordinates (KPF) and the corresponding control parameters (aM) and (b) are expressed in Eq. (28). The unified Persian-Fasa-curve (PFU), the unified Persian-Shiraz-curve (PSU) and the unified Persian-Zahedan-curve (PZU) from Eq. (25) are also shown in Fig. 7. Total agreement of the results in general, and at the key points (KPF) in specific, is used as a flag for verification of the validity of the unified fragility curve. This close agreement is a powerful reason for recommending the (PC) as a replacement for the (MCS).
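The (BF) curve is described as a log-normal curve fitted to the incremental dynamic analysis data. For reference, the standard log-normal fragility form used in that literature is sketched below; the median and dispersion values are illustrative, not the fitted parameters of the (BF) frame.

```python
import math

def lognormal_fragility(im, median, beta):
    """Standard log-normal fragility: probability that the limit state is
    exceeded at intensity measure im, given the median capacity and the
    logarithmic dispersion beta."""
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At im equal to the median, the exceedance probability is 0.5 by construction.
print(lognormal_fragility(0.6, 0.6, 0.4))  # 0.5
```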

Conclusion
The following conclusions are obtained from the present study:
- The Persian Curve is equal to the expected value of the output in the (MCS) with probability 1
- The Persian Curve is an excellent replacement for the Monte Carlo Simulation
- The Persian Curve is the best method for analysis of real world data
- The problems that were solved by the Monte Carlo Simulation are recommended to be re-solved with the Persian Curve
- Replacement of the Monte Carlo Simulation by the Persian Curve is a revolution in human knowledge