Pdf of Continuous Exponential Random Variable
Exponential Random Variable
The exponential random variable is defined by the density function (see Fig. 1-2b)
(1.4-5) P(x) = a exp(−ax) for x ≥ 0, and P(x) = 0 for x < 0,
where a is any positive real number.
From: Markov Processes , 1992
Random Variables, Distributions, and Density Functions
Scott L. Miller , Donald Childers , in Probability and Random Processes, 2004
3.4.2 Exponential Random Variable
The exponential random variable has a probability density function and cumulative distribution function given (for any b > 0) by
(3.19a) fX(x) = (1/b) exp(−x/b) u(x),
(3.19b) FX(x) = (1 − exp(−x/b)) u(x).
A plot of the PDF and the CDF of an exponential random variable is shown in Figure 3.9. The parameter b is related to the width of the PDF, and the PDF has a peak value of 1/b, which occurs at x = 0. The PDF and CDF are nonzero over the semi-infinite interval [0, ∞).
Exponential random variables are commonly encountered in the study of queueing systems. The time between arrivals of customers at a bank, for example, is commonly modeled as an exponential random variable, as is the duration of voice conversations in a telephone network.
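The PDF and CDF above are straightforward to evaluate numerically. The short Python sketch below tabulates both and checks them against scipy.stats.expon, which uses the same scale = b convention; the value b = 2 and the grid of x values are arbitrary illustrative choices, not part of the original text.

```python
# Minimal numerical sketch of Equations (3.19a) and (3.19b) under the
# 1/b parameterization described in the text; b = 2 is an assumed value.
import numpy as np
from scipy import stats

b = 2.0                      # assumed width parameter (peak value of the PDF is 1/b)
x = np.linspace(0.0, 10.0, 6)

pdf = (1.0 / b) * np.exp(-x / b)          # f_X(x) for x >= 0
cdf = 1.0 - np.exp(-x / b)                # F_X(x) for x >= 0

# scipy.stats.expon uses the same "scale" convention, so the results should agree.
print(np.allclose(pdf, stats.expon(scale=b).pdf(x)))   # True
print(np.allclose(cdf, stats.expon(scale=b).cdf(x)))   # True
```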
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780121726515500038
The Exponential Distribution and the Poisson Process
Sheldon M. Ross , in Introduction to Probability Models (Twelfth Edition), 2019
5.2.1 Definition
A continuous random variable X is said to have an exponential distribution with parameter λ, λ > 0, if its probability density function is given by
f(x) = λe^(−λx) for x ≥ 0, and f(x) = 0 for x < 0,
or, equivalently, if its cdf is given by
F(x) = 1 − e^(−λx) for x ≥ 0, and F(x) = 0 for x < 0.
The mean of the exponential distribution, E[X], is given by
E[X] = ∫_0^∞ x λe^(−λx) dx.
Integrating by parts yields
E[X] = 1/λ.
The moment generating function of the exponential distribution is given by
(5.1) ϕ(t) = E[e^(tX)] = λ/(λ − t), t < λ.
All the moments of X can now be obtained by differentiating Eq. (5.1). For example, differentiating twice and setting t = 0 gives
E[X²] = 2/λ².
Consequently,
Var(X) = E[X²] − (E[X])² = 2/λ² − 1/λ² = 1/λ².
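As a quick numerical sanity check of these results (not part of the original text), the following Python sketch estimates the mean, second moment, and variance from simulated exponential samples and compares them with 1/λ, 2/λ², and 1/λ²; the rate λ = 1.5 and the sample size are arbitrary choices.

```python
# Monte Carlo check: for X ~ Exponential(lambda), E[X] = 1/lambda,
# E[X^2] = 2/lambda^2, and Var(X) = 1/lambda^2.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5                                   # assumed rate parameter
x = rng.exponential(scale=1.0 / lam, size=1_000_000)

print(x.mean(), 1.0 / lam)                  # ~0.667 vs 0.667
print((x**2).mean(), 2.0 / lam**2)          # ~0.889 vs 0.889
print(x.var(), 1.0 / lam**2)                # ~0.444 vs 0.444
```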
Example 5.1 Exponential Random Variables and Expected Discounted Returns
Suppose that you are receiving rewards at randomly changing rates continuously throughout time. Let R(x) denote the random rate at which you are receiving rewards at time x. For a value α > 0, called the discount rate, the quantity
R = ∫_0^∞ e^(−αx) R(x) dx
represents the total discounted reward. (In certain applications, α is a continuously compounded interest rate, and R is the present value of the infinite flow of rewards.) Whereas
E[R] = E[∫_0^∞ e^(−αx) R(x) dx]
is the expected total discounted reward, we will show that it is also equal to the expected total reward earned up to an exponentially distributed random time with rate α.
Let T be an exponential random variable with rate α that is independent of all the random variables R(x), x ≥ 0. We want to argue that
E[∫_0^∞ e^(−αx) R(x) dx] = E[∫_0^T R(x) dx].
To show this, define for each x ≥ 0 a random variable I(x) by
I(x) = 1 if x ≤ T, and I(x) = 0 if x > T,
and note that
∫_0^T R(x) dx = ∫_0^∞ R(x) I(x) dx.
Thus,
E[∫_0^T R(x) dx] = ∫_0^∞ E[R(x) I(x)] dx = ∫_0^∞ E[R(x)] E[I(x)] dx = ∫_0^∞ E[R(x)] e^(−αx) dx = E[∫_0^∞ e^(−αx) R(x) dx],
where the independence of T and R(x) justifies factoring the expectation, and E[I(x)] = P(T ≥ x) = e^(−αx).
Therefore, the expected total discounted reward is equal to the expected total (undiscounted) reward earned by a random time that is exponentially distributed with a rate equal to the discount factor. ■
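The identity in Example 5.1 can also be illustrated numerically. In the sketch below (not from the text), the possibly random reward rate R(x) is replaced by the deterministic function R(x) = x purely for illustration, in which case both sides of the identity equal 1/α²; the discount rate α = 0.5 and the sample size are arbitrary.

```python
# Monte Carlo illustration of Example 5.1 with the simplifying assumption
# R(x) = x (deterministic), so both sides of the identity equal 1/alpha^2.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5                                       # assumed discount rate

# Left side: integral_0^inf exp(-alpha*x) * x dx = 1/alpha^2 (exact)
lhs = 1.0 / alpha**2

# Right side: E[ integral_0^T x dx ] = E[T^2 / 2], estimated by simulation
T = rng.exponential(scale=1.0 / alpha, size=1_000_000)
rhs = np.mean(T**2 / 2.0)

print(lhs, rhs)                                   # 4.0 vs approximately 4.0
```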
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B978012814346900010X
Operations on a Single Random Variable
Scott L. Miller , Donald Childers , in Probability and Random Processes (Second Edition), 2012
4.7 Characteristic Functions
In this section, we introduce the concept of a characteristic function. The characteristic function of a random variable is closely related to the Fourier transform of the PDF of that random variable. Thus, the characteristic function provides a sort of "frequency domain" representation of a random variable, although in this context there is no connection between our frequency variable ω and any physical frequency. In studies of deterministic signals, it was found that the use of Fourier transforms greatly simplified many problems, especially those involving convolutions. We will see in future chapters the need for performing convolution operations on PDFs of random variables, and hence frequency domain tools will become quite useful. Furthermore, we will find that characteristic functions have many other uses. For example, the characteristic function is quite useful for finding moments of a random variable. In addition to the characteristic function, two other related functions, namely, the moment-generating function (analogous to the Laplace transform) and the probability-generating function (analogous to the z-transform), will also be studied in the following sections.
Definition 4.7: The characteristic function of a random variable, X, is given by
(4.36) ΦX(ω) = E[e^(jωX)] = ∫_(−∞)^(∞) fX(x) e^(jωx) dx.
Note the similarity between this integral and the Fourier transform. In most of the electrical engineering literature, the Fourier transform of the function fX (x) would be Φ(−ω). Given this relationship between the PDF and the characteristic function, it should be clear that one can get the PDF of a random variable from its characteristic function through an inverse Fourier transform operation:
(4.37) fX(x) = (1/2π) ∫_(−∞)^(∞) ΦX(ω) e^(−jωx) dω.
The characteristic functions associated with various random variables can be easily found using tables of commonly used Fourier transforms, but one must be careful since the Fourier integral used in Equation (4.36) may be different from the definition used to generate common tables of Fourier transforms. In addition, various properties of Fourier transforms can also be used to help calculate characteristic functions as shown in the following example.
Example 4.18
An exponential random variable has a PDF given by fX(x) = exp(−x)u(x). Its characteristic function is found to be
ΦX(ω) = ∫_0^∞ exp(−x) exp(jωx) dx = 1/(1 − jω).
This result assumes that ω is a real quantity. Now suppose another random variable Y has a PDF given by fY(y) = a exp(−ay)u(y). Note that fY(y) = afX(ay); thus, using the scaling property of Fourier transforms, the characteristic function associated with the random variable Y is given by
ΦY(ω) = ΦX(ω/a) = a/(a − jω),
assuming a is a positive constant (which it must be for Y to have a valid PDF). Finally, suppose that Z has a PDF given by fZ(z) = a exp(−a(z − b))u(z − b). Since fZ(z) = fY(z − b), the shifting property of Fourier transforms can be used to help find the characteristic function associated with the random variable Z:
ΦZ(ω) = e^(jωb) ΦY(ω) = a e^(jωb)/(a − jω).
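The closed forms in Example 4.18 can be spot-checked by simulation. The sketch below (not from the book) estimates E[exp(jωY)] from exponential samples and compares it with a/(a − jω); the value a = 2 and the choice of ω values are arbitrary.

```python
# Numerical check that the characteristic function of
# f_Y(y) = a*exp(-a*y)*u(y) is a / (a - j*omega).
import numpy as np

rng = np.random.default_rng(2)
a = 2.0                                     # assumed positive constant
y = rng.exponential(scale=1.0 / a, size=500_000)

for omega in (0.0, 0.5, 1.0, 3.0):
    empirical = np.mean(np.exp(1j * omega * y))   # sample estimate of E[exp(j*omega*Y)]
    closed_form = a / (a - 1j * omega)
    print(omega, empirical, closed_form)          # the two values should agree closely
```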
The next example demonstrates that the characteristic function can also be computed for discrete random variables. In Section 4.8, the probability-generating function will be introduced which is preferred by some when dealing with discrete random variables.
Example 4.19
A binomial random variable has a PDF which can be expressed as
Its characteristic function is computed as follows:
Since the Gaussian random variable plays such an important role in so many studies, we derive its characteristic function in Example 4.20. We recommend that the student commit the result of this example to memory. The techniques used to arrive at this result are also important and should be carefully understood.
Example 4.20
For a standard normal random variable, the characteristic function can be found as follows:
To evaluate this integral, we complete the square in the exponent.
The integrand in the above expression looks like the properly normalized PDF of a Gaussian random variable, and since the integral is over all values of x, the integral must be unity. However, close examination of the integrand reveals that the "mean" of this Gaussian integrand is complex. It is left to the student to rigorously verify that this integral still evaluates to unity even though the integrand is not truly a Gaussian PDF (since it is a complex function and hence not a PDF at all). The resulting characteristic function is then
ΦX(ω) = exp(−ω²/2).
For a Gaussian random variable whose mean is not zero or whose standard deviation is not unity (or both), the shifting and scaling properties of Fourier transforms can be used to show that
ΦX(ω) = exp(jμω − σ²ω²/2).
Theorem 4.3: For any random variable whose characteristic function is differentiable at ω = 0,
(4.38) E[X] = −j (dΦX(ω)/dω)|_(ω=0).
Proof: The proof follows directly from the fact that the expectation and differentiation operations are both linear and consequently the order of these operations can be exchanged:
(d/dω)ΦX(ω) = (d/dω)E[e^(jωX)] = E[(d/dω)e^(jωX)] = E[jX e^(jωX)].
Multiplying both sides by −j and evaluating at ω = 0 produces the desired result. ■
Theorem 4.3 demonstrates a very powerful use of the characteristic function. Once the characteristic function of a random variable has been found, it is generally a very straightforward thing to produce the mean of the random variable. Furthermore, by taking the kth derivative of the characteristic function and evaluating at ω = 0, an expression proportional to the kth moment of the random variable is produced. In particular,
(4.39) E[X^k] = (−j)^k (d^k ΦX(ω)/dω^k)|_(ω=0).
Hence, the characteristic function represents a convenient tool to easily determine the moments of a random variable.
Example 4.21
Consider the exponential random variable of Example 4.18 where fY(y) = a exp(−ay)u(y). The characteristic function was found to be
ΦY(ω) = a/(a − jω).
The derivative of the characteristic function is
(d/dω)ΦY(ω) = ja/(a − jω)²,
and thus the first moment of Y is
E[Y] = −j (d/dω)ΦY(ω)|_(ω=0) = −j (j/a) = 1/a.
For this example, it is not difficult to show that the kth derivative of the characteristic function is
(d^k/dω^k)ΦY(ω) = j^k k! a/(a − jω)^(k+1),
and from this, the kth moment of the random variable is found to be
E[Y^k] = (−j)^k (d^k/dω^k)ΦY(ω)|_(ω=0) = k!/a^k.
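The same moment calculation can be reproduced symbolically. The following sketch (an illustration, not the book's code) differentiates ΦY(ω) = a/(a − jω) with sympy and confirms that E[Y^k] = k!/a^k for the first few k.

```python
# Symbolic verification of E[Y^k] = k!/a^k via repeated differentiation
# of the characteristic function Phi_Y(omega) = a/(a - j*omega).
import sympy as sp

a, w = sp.symbols('a omega', positive=True, real=True)
j = sp.I
Phi = a / (a - j * w)

for k in range(1, 5):
    moment = sp.simplify(((-j) ** k * sp.diff(Phi, w, k)).subs(w, 0))
    print(k, moment, sp.factorial(k) / a**k)   # both expressions equal k!/a^k
```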
For random variables that have a more complicated characteristic function, evaluating the kth derivative in general may not be an easy task. However, Equation (4.39) only calls for the kth derivative evaluated at a single point (ω = 0), which can be extracted from the Taylor series expansion of the characteristic function. To see this, note that from Taylor's theorem, the characteristic function can be expanded in a power series as
(4.40) ΦX(ω) = Σ_(k=0)^∞ (d^k ΦX(ω)/dω^k)|_(ω=0) ω^k/k!.
If one can obtain a power series expansion of the characteristic function, then the required derivatives are proportional to the coefficients of the power series. Specifically, suppose an expansion of the form
(4.41) ΦX(ω) = Σ_(k=0)^∞ φ_k ω^k
is obtained. Then the derivatives of the characteristic function are given by
(4.42) (d^k ΦX(ω)/dω^k)|_(ω=0) = k! φ_k.
The moments of the random variable are then given by
(4.43) E[X^k] = (−j)^k k! φ_k.
This procedure is illustrated using a Gaussian random variable in the next example.
Example 4.22
Consider a Gaussian random variable with a mean of μ = 0 and variance σ². Using the result of Example 4.20, the characteristic function is ΦX(ω) = exp(−ω²σ²/2). Using the well-known Taylor series expansion of the exponential function, the characteristic function is expressed as
ΦX(ω) = Σ_(n=0)^∞ (−σ²/2)^n ω^(2n)/n!.
The coefficients of the general power series as expressed in Equation (4.41) are given by
φ_k = (−σ²/2)^(k/2)/(k/2)! for k even, and φ_k = 0 for k odd.
Hence, the moments of the zero-mean Gaussian random variable are
E[X^k] = k! σ^k/[2^(k/2) (k/2)!] for k even, and E[X^k] = 0 for k odd.
As expected, E[X⁰] = 1, E[X] = 0 (since it was specified that μ = 0), and E[X²] = σ² (since in the case of zero-mean variables, the second moment and variance are one and the same). Now, we also see that E[X³] = 0 (as are all odd moments), E[X⁴] = 3σ⁴, E[X⁶] = 15σ⁶, and so on. We can also conclude from this that for Gaussian random variables, the coefficient of skewness is c_s = 0 while the coefficient of kurtosis is c_k = 3.
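These Gaussian moments are easy to confirm by simulation. The sketch below (not from the text) estimates the second, third, fourth, and sixth moments of zero-mean Gaussian samples and compares them with σ², 0, 3σ⁴, and 15σ⁶; the value σ = 2 and the sample size are arbitrary.

```python
# Monte Carlo spot check of the zero-mean Gaussian moments quoted above.
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0                                  # assumed standard deviation
x = rng.normal(0.0, sigma, size=2_000_000)

print(np.mean(x**2), sigma**2)               # ~4 vs 4
print(np.mean(x**3), 0.0)                    # ~0
print(np.mean(x**4), 3 * sigma**4)           # ~48 vs 48
print(np.mean(x**6), 15 * sigma**6)          # ~960 vs 960
```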
In many cases of interest, the characteristic function has an exponential form. The Gaussian random variable is a typical example. In such cases, it is convenient to deal with the natural logarithm of the characteristic function.
Definition 4.8: In general, we can write a series expansion of ln[ΦX(ω)] as
(4.44) ln[ΦX(ω)] = Σ_(n=1)^∞ λ_n (jω)^n/n!,
where the coefficients, λ_n, are called the cumulants and are given as
(4.45) λ_n = (1/j^n)(d^n/dω^n) ln[ΦX(ω)]|_(ω=0), n = 1, 2, 3, ….
The cumulants are related to the moments of the random variable. By taking the derivatives specified in Equation (4.45) we obtain
(4.46) λ1 = μX,
(4.47) λ2 = E[X²] − μX² = σX²,
(4.48) λ3 = E[X³] − 3μX E[X²] + 2μX³.
Thus, λ1 is the mean, λ2 is the second central moment (or the variance), and λ3 is the third central moment. However, higher-order cumulants are not as simply related to the central moments.
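The cumulant definition in Equation (4.45) can also be exercised symbolically. The sketch below (not from the text) applies it to the exponential random variable of Example 4.21, whose characteristic function is a/(a − jω), and recovers λ1 = 1/a (the mean), λ2 = 1/a² (the variance), and λ3 = 2/a³ (the third central moment), consistent with the statements above.

```python
# Cumulants computed as lambda_n = (1/j^n) * d^n/domega^n ln(Phi(omega)) at omega = 0,
# for the exponential random variable with Phi_Y(omega) = a/(a - j*omega).
import sympy as sp

a, w = sp.symbols('a omega', positive=True, real=True)
j = sp.I
logPhi = sp.log(a / (a - j * w))

for n in range(1, 4):
    lam_n = sp.simplify((sp.diff(logPhi, w, n) / j**n).subs(w, 0))
    print(n, lam_n)
# Expected output: 1/a (mean), 1/a**2 (variance), 2/a**3 (third central moment).
```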
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123869814500072
Random Variables, Distributions, and Density Functions
Scott L. Miller , Donald Childers , in Probability and Random Processes (Second Edition), 2012
3.4.7 Rayleigh Random Variable
A Rayleigh random variable, like the exponential random variable, has a one-sided PDF. The functional form of the PDF and CDF is given (for any σ > 0) by
(3.28a) fX(x) = (x/σ²) exp(−x²/(2σ²)) u(x),
(3.28b) FX(x) = (1 − exp(−x²/(2σ²))) u(x).
Plots of these functions are shown in Figure 3.11. The Rayleigh distribution is described by a single parameter, σ², which is related to the width of the Rayleigh PDF. In this case, the parameter σ² is not to be interpreted as the variance of the Rayleigh random variable. It will be shown later that the Rayleigh distribution arises when studying the magnitude of a complex number whose real and imaginary parts both follow a zero-mean Gaussian distribution. The Rayleigh distribution arises often in the study of noncoherent communication systems and also in the study of wireless communication channels, where the phenomenon known as fading is often modeled using Rayleigh random variables.
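The connection to complex Gaussian magnitudes mentioned above is easy to check by simulation. The sketch below (not from the text) compares the empirical CDF of the magnitude of complex samples with zero-mean Gaussian real and imaginary parts against the Rayleigh CDF of Equation (3.28b); σ = 1.5 and the evaluation points are arbitrary choices.

```python
# The magnitude of a complex number with independent N(0, sigma^2) real and
# imaginary parts should follow the Rayleigh distribution with parameter sigma^2.
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.5                                     # assumed Gaussian standard deviation
re = rng.normal(0.0, sigma, size=200_000)
im = rng.normal(0.0, sigma, size=200_000)
r = np.hypot(re, im)                            # magnitude of the complex samples

# Compare the empirical CDF with F_R(r) = 1 - exp(-r^2/(2*sigma^2)) at a few points.
for r0 in (0.5, 1.5, 3.0):
    print(np.mean(r <= r0), 1.0 - np.exp(-r0**2 / (2 * sigma**2)))
```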
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123869814500060
Basic Concepts in Probability
Oliver C. Ibe , in Markov Processes for Stochastic Modeling (Second Edition), 2013
1.8.6 The Exponential Distribution
A continuous random variable X is defined to be an exponential random variable (or X has an exponential distribution) if for some parameter λ > 0 its PDF is given by
fX(x) = λe^(−λx), x ≥ 0, and fX(x) = 0 otherwise.
The CDF, mean, and variance of X, and the s-transform of its PDF are given by
FX(x) = 1 − e^(−λx), x ≥ 0, E[X] = 1/λ, σX² = 1/λ², MX(s) = E[e^(−sX)] = λ/(s + λ).
Like the geometric distribution, the exponential distribution possesses the forgetfulness property. Thus, if we consider the occurrence of an event governed by the exponential distribution as an arrival, then given that no arrival has occurred up to time t, the time until the next arrival is exponentially distributed with mean 1/λ. In particular, it can be shown, as in Ibe (2005), that
P[X > t + s | X > t] = P[X > s] = e^(−λs).
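The forgetfulness (memoryless) property is easy to demonstrate numerically. The sketch below (not from the text) checks that the residual time beyond t, given X > t, again has mean 1/λ, and that P[X > t + s | X > t] is approximately P[X > s]; the parameter values are arbitrary.

```python
# Simulation of the memoryless property of the exponential distribution.
import numpy as np

rng = np.random.default_rng(5)
lam, t = 0.2, 3.0                               # assumed rate and elapsed time
x = rng.exponential(scale=1.0 / lam, size=2_000_000)

residual = x[x > t] - t                         # residual life given X > t
print(residual.mean(), 1.0 / lam)               # both approximately 5.0
print(np.mean(x > t + 4.0) / np.mean(x > t),    # P[X > t+s | X > t]
      np.mean(x > 4.0))                         # equals P[X > s]
```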
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124077959000013
Functions of Random Variables
Oliver C. Ibe , in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014
6.4.5 The Spare Parts Problem
From Figure 6.2 we can see that finding the sum of independent random variables is equivalent to finding the lifetime of a system that achieves continuous operation by permitting instantaneous replacement of a component with a spare part at the time of the component's failure. One interesting issue is to find the probability that the life of the system exceeds a given value. For the case where only one spare part is available, we are basically dealing with the sum of two random variables. For the case where we have n − 1 spare parts, we are dealing with the sum of n random variables. For the case of n = 2, we have that if the lifetime of the primary component is X and the lifetime of the spare component is Y, where X and Y are independent, then the lifetime of the system W and its PDF are given by
W = X + Y, fW(w) = ∫_0^w fX(x) fY(w − x) dx.
Thus, if we define the reliability function of the system by RW(w), the probability that the lifetime of the system exceeds the value w0 is given by
P[W > w0] = RW(w0) = 1 − FW(w0).
If it is desired that P[W > w0] ≥ φ, where 0 ≤ φ ≤ 1, then we could be required to find the parameters of X and Y that are necessary to achieve this goal. For example, if X and Y are independent and identically distributed exponential random variables with mean 1/λ, we can find the smallest mean value of the random variables that can achieve this goal.
For the case of n − 1 spare parts, the lifetime of the system U is given by
U = X_1 + X_2 + ⋯ + X_n,
where X_k is the lifetime of the kth component. If we assume that the X_k are independent, the PDF of U is given by the following n-fold convolution:
fU(u) = (f_X1 * f_X2 * ⋯ * f_Xn)(u).
For the special case when the X_k are identically distributed exponential random variables with mean 1/λ, U becomes an nth order Erlang random variable with the PDF, CDF, and reliability function given by
(6.12) fU(u) = λ^n u^(n−1) e^(−λu)/(n − 1)!, FU(u) = 1 − Σ_(k=0)^(n−1) (λu)^k e^(−λu)/k!, RU(u) = Σ_(k=0)^(n−1) (λu)^k e^(−λu)/k!, u ≥ 0.
Example 6.12
A system consists of one component whose lifetime is exponentially distributed with a mean of 50 hours. When the component fails, it is immediately replaced by a spare component whose lifetime is independent and identically distributed as that of the original component without the system suffering a downtime.
- a.
-
What is the probability that the system has not failed after 100 hours of operation?
- b.
-
If the mean lifetime of the component and its spare is increased by 10%, how does that affect the probability that the system exceeds a lifetime of 100 hours?
Solution:
Let X be a random variable that denotes the lifetime of the component and let U be the random variable that denotes the lifetime of the system. Then, U is an Erlang-2 random variable whose PDF, CDF, and reliability function are given by
fU(u) = λ²u e^(−λu), FU(u) = 1 − e^(−λu)(1 + λu), RU(u) = e^(−λu)(1 + λu), u ≥ 0.
- a.
-
Since 1/λ = 50, we have that
RU(100) = e^(−100/50)(1 + 100/50) = 3e^(−2) ≈ 0.406.
- b.
-
When we increase the mean lifetime of the component by 10%, we obtain 1/λ = 50(1 + 0.1) = 55. Thus, with the new value λu = 100/55, the corresponding value of RU(100) is
RU(100) = e^(−100/55)(1 + 100/55) ≈ 0.458.
That is, the probability that the system lifetime exceeds 100 hours increases by approximately 13%.
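A short numerical restatement of Example 6.12 (a sketch, not the book's code) using the Erlang-2 reliability function RU(u) = e^(−λu)(1 + λu):

```python
# Example 6.12 evaluated numerically with the Erlang-2 reliability function.
import math

def reliability_erlang2(u, mean_life):
    lam = 1.0 / mean_life
    return math.exp(-lam * u) * (1.0 + lam * u)

r50 = reliability_erlang2(100.0, 50.0)      # part (a): mean lifetime 50 hours
r55 = reliability_erlang2(100.0, 55.0)      # part (b): mean increased by 10%
print(round(r50, 4))                        # about 0.406
print(round(r55, 4))                        # about 0.458
print(round(100.0 * (r55 - r50) / r50, 1))  # about a 13% increase
```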
Example 6.13
The time to failure of a component of a system is exponentially distributed with a mean of 100 hours. If the component fails, it is immediately replaced by an identical spare component whose time to failure is independent of that of the previous one and the system experiences no downtime in the process of component replacement. What is the smallest number of spare parts that must be used to guarantee continuous operation of the system for at least 300 hours with a probability of at least 0.95?
Solution:
Let X be the random variable that denotes the lifetime of a component, and let the number of spare parts be n − 1. Let U be the random variable that denotes the lifetime of the system. Then U = X_1 + X_2 + ⋯ + X_n, which is an Erlang-n random variable whose reliability function is given by
RU(w) = Σ_(k=0)^(n−1) (λw)^k e^(−λw)/k!.
Since 1/λ = 100, we have that
RU(300) = Σ_(k=0)^(n−1) 3^k e^(−3)/k!.
The following table shows the values of R U (300) for different values of n:
n − 1 | RU(300) |
---|---|
0 | 0.0498 |
1 | 0.1991 |
2 | 0.4232 |
3 | 0.6472 |
4 | 0.8153 |
5 | 0.9161 |
6 | 0.9665 |
Thus, we see that with n − 1 = 5 we cannot provide the required probability of operation, while with n − 1 = 6 we can. This means that we need 6 spare components to achieve the goal.
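The table can be reproduced with a few lines of Python (a sketch, not the book's code); it evaluates RU(300) = Σ_(k=0)^(n−1) e^(−3) 3^k/k! for n − 1 = 0, …, 6 and confirms that six spares are needed.

```python
# Reliability of an Erlang-n system lifetime with component mean 1/lambda = 100 hours.
import math

def erlang_reliability(w, lam, n):
    # P[U > w] for U ~ Erlang(n, lam)
    return sum(math.exp(-lam * w) * (lam * w) ** k / math.factorial(k)
               for k in range(n))

lam, w = 1.0 / 100.0, 300.0
for spares in range(0, 7):                      # spares = n - 1
    print(spares, round(erlang_reliability(w, lam, spares + 1), 4))
# The printed values match the table above; the first row meeting 0.95 is spares = 6.
```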
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128008522000067
Simulation Techniques
Scott L. Miller , Donald Childers , in Probability and Random Processes (Second Edition), 2012
12.1.3 Generation of Random Numbers from a Specified Distribution
Quite often, we are interested in generating random variables that obey some distribution other than a uniform distribution. In this case, it is generally a fairly simple task to transform a uniform random number generator into one that follows some other distribution. Consider forming a monotonic increasing transformation g() on a random variable X to form a new random variable Y. From the results of Chapter 4, the PDFs of the random variables involved are related by
Given an arbitrary PDF, fX(x), the transformation Y = g(X) will produce a uniform random variable Y if dg/dx = fX(x), or equivalently g(x) = FX(x). Viewing this result in reverse, if X is uniformly distributed over (0, 1) and we want to create a new random variable Y with a specified distribution, FY(y), the transformation Y = FY^(−1)(X) will do the job.
Example 12.3
Suppose we want to transform a uniform random variable into an exponential random variable with a PDF of the form
The corresponding CDF is
Therefore, to transform a uniform random variable into an exponential random variable, we can use the transformation
Note that if X is uniformly distributed over (0, 1), then 1 − X will be uniformly distributed as well so that the slightly simpler transformation
will also work.
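A concrete version of Example 12.3 is sketched below; since the example's exact parameterization is not reproduced above, the PDF fY(y) = a exp(−ay)u(y) with a = 0.5 is assumed here for illustration, so that Y = −ln(X)/a transforms a uniform X into an exponential Y.

```python
# Inverse-CDF transformation of uniform samples into exponential samples,
# under the assumed parameterization f_Y(y) = a*exp(-a*y)*u(y).
import numpy as np

rng = np.random.default_rng(6)
a = 0.5                                     # assumed rate parameter
x = rng.uniform(0.0, 1.0, size=1_000_000)

y = -np.log(x) / a                          # transformed samples
print(y.mean(), 1.0 / a)                    # sample mean vs theoretical mean 1/a
print(np.mean(y > 2.0), np.exp(-a * 2.0))   # tail probability vs exp(-a*2)
```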
This approach for generation of random variables works well provided that the CDF of the desired distribution is invertible. One notable exception where this approach will be difficult is the Gaussian random variable. Suppose, for example, we wanted to transform a uniform random variable, X, into a standard normal random variable, Y. The CDF in this case is the complement of a Q-function, FY(y) = 1 − Q(y). The inverse of this function would then provide the appropriate transformation, y = Q^(−1)(1 − x), or, as with the previous example, we could simplify this to y = Q^(−1)(x). The problem here lies with the inverse Q-function, which cannot be expressed in closed form. One could devise efficient numerical routines to compute the inverse Q-function, but fortunately there is an easier approach.
An efficient method to generate Gaussian random variables from uniform random variables is based on the following 2 × 2 transformation. Let X 1 and X 2 be two independent uniform random variables (over the interval (0, 1)). Then if two new random variables, Y1 and Y2 are created according to
(12.5a) Y1 = √(−2 ln X1) cos(2πX2),
(12.5b) Y2 = √(−2 ln X1) sin(2πX2),
then Y1 and Y2 will be independent standard normal random variables (see Example 5.24). This famous result is known as the Box-Muller transformation and is commonly used to generate Gaussian random variables. If a pair of Gaussian random variables is not needed, one of the two can be discarded. This method is particularly convenient for generating complex Gaussian random variables since it naturally generates pairs of independent Gaussian random variables. Note that if Gaussian random variables are needed with different means or variances, this can easily be accomplished through an appropriate linear transformation. That is, if Y ∼ N(0, 1), then Z = σY + μ will produce Z ∼ N(μ, σ²).
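A minimal Python sketch of the Box-Muller transformation in Equations (12.5a) and (12.5b) is given below (the seed and sample size are arbitrary); it checks that the generated pair is approximately standard normal and uncorrelated.

```python
# Box-Muller transformation: two independent U(0, 1) variables become
# two independent standard normal variables.
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.uniform(size=500_000)
x2 = rng.uniform(size=500_000)

r = np.sqrt(-2.0 * np.log(x1))              # Rayleigh-distributed radius
y1 = r * np.cos(2.0 * np.pi * x2)           # first standard normal sample
y2 = r * np.sin(2.0 * np.pi * x2)           # second, independent of the first

print(y1.mean(), y1.std(), y2.std())        # approximately 0, 1, 1
print(np.corrcoef(y1, y2)[0, 1])            # approximately 0 (uncorrelated pair)

# A non-standard normal Z ~ N(mu, sigma^2) then follows from Z = sigma*y1 + mu.
```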
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123869814500151
Simulation
Sheldon M. Ross , in Introduction to Probability Models (Tenth Edition), 2010
Method 1. Sampling a Poisson Process
To simulate the first T time units of a nonhomogeneous Poisson process with intensity function λ(t), let λ be such that
λ(t) ≤ λ for all t ≤ T.
Now, as shown in Chapter 5, such a nonhomogeneous Poisson process can be generated by a random selection of the event times of a Poisson process having rate λ. That is, if an event of a Poisson process with rate λ that occurs at time t is counted (independently of what has transpired previously) with probability λ(t)/λ then the process of counted events is a nonhomogeneous Poisson process with intensity function λ(t),0 ≤ t ≤ T. Hence, by simulating a Poisson process and then randomly counting its events, we can generate the desired nonhomogeneous Poisson process. We thus have the following procedure:
Generate independent random variables X_1, U_1, X_2, U_2, …, where the X_i are exponential with rate λ and the U_i are random numbers, stopping at
N = min{n: X_1 + ⋯ + X_n > T}.
Now let, for j = 1, …, N − 1,
I_j = 1 if U_j ≤ λ(X_1 + ⋯ + X_j)/λ, and I_j = 0 otherwise,
and set
S = {X_1 + ⋯ + X_j : I_j = 1}.
Thus, the counting process having events at the set of times S constitutes the desired process.
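A compact implementation of the basic thinning procedure just described is sketched below (not the book's code); the intensity function λ(t) = 10 + t on (0, 1) with bound λ = 11 matches the illustrative case discussed later in this section.

```python
# Thinning algorithm: simulate a rate-lam Poisson process on [0, T] and keep
# each candidate event at time t with probability lam_t(t)/lam, where
# lam_t(t) <= lam on [0, T].
import numpy as np

def thinning(lam_t, lam, T, rng):
    """Return event times of a nonhomogeneous Poisson process with intensity lam_t."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(scale=1.0 / lam)      # next candidate event time
        if t > T:
            return np.array(times)
        if rng.uniform() <= lam_t(t) / lam:        # accept with probability lam_t(t)/lam
            times.append(t)

rng = np.random.default_rng(8)
lam_t = lambda t: 10.0 + t                         # example intensity, bounded by 11 on (0, 1)
events = thinning(lam_t, lam=11.0, T=1.0, rng=rng)
print(len(events), events[:5])                     # expected count is about 10.5
```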
The foregoing procedure, referred to as the thinning algorithm (because it "thins" the homogeneous Poisson points), will clearly be most efficient, in the sense of having the fewest number of rejected event times, when λ(t) is near λ throughout the interval. Thus, an obvious improvement is to break up the interval into subintervals and then use the procedure over each subinterval. That is, determine appropriate values k, 0 < t_1 < t_2 < ⋯ < t_k < T, and λ_1, …, λ_(k+1), such that
(11.9) λ(s) ≤ λ_i whenever t_(i−1) ≤ s < t_i, i = 1, …, k + 1 (where t_0 = 0 and t_(k+1) = T).
Now simulate the nonhomogeneous Poisson process over the interval (t_(i−1), t_i) by generating exponential random variables with rate λ_i and accepting the generated event occurring at time s, s ∈ (t_(i−1), t_i), with probability λ(s)/λ_i. Because of the memoryless property of the exponential and the fact that the rate of an exponential can be changed upon multiplication by a constant, it follows that there is no loss of efficiency in going from one subinterval to the next. In other words, if we are at t ∈ [t_(i−1), t_i) and generate X, an exponential with rate λ_i, which is such that t + X > t_i, then we can use λ_i[X − (t_i − t)]/λ_(i+1) as the next exponential with rate λ_(i+1). Thus, we have the following algorithm for generating the first T time units of a nonhomogeneous Poisson process with intensity function λ(s) when the relations (11.9) are satisfied. In the algorithm, t will represent the present time and I the present interval (that is, I = i when t_(i−1) ≤ t < t_i).
- Step 1:
-
t = 0, I = 1.
- Step 2:
-
Generate an exponential random variable X having rate λ I .
- Step 3:
-
If t + X < t_I, reset t = t + X, generate a random number U, and accept the event time t if U ≤ λ(t)/λ_I. Return to step 2.
- Step 4:
-
(Step reached if t + X ≥ t_I.) Stop if I = k + 1. Otherwise, reset X = (X − (t_I − t))λ_I/λ_(I+1). Also reset t = t_I and I = I + 1, and go to step 3.
Suppose now that over some subinterval (t_(i−1), t_i) it follows that λ̲_i > 0, where
(11.10) λ̲_i ≡ inf{λ(s): t_(i−1) ≤ s < t_i}.
In such a situation, we should not use the thinning algorithm directly but rather should first simulate a Poisson process with rate λ̲_i over the desired interval and then simulate a nonhomogeneous Poisson process with the intensity function λ(s) − λ̲_i when s ∈ (t_(i−1), t_i). (The final exponential generated for the Poisson process, which carries one beyond the desired boundary, need not be wasted but can be suitably transformed so as to be reusable.) The superposition (or merging) of the two processes yields the desired process over the interval. The reason for doing it this way is that it saves the need to generate uniform random variables for a Poisson distributed number, with mean λ̲_i(t_i − t_(i−1)), of the event times. For instance, consider the case where
λ(s) = 10 + s, 0 < s < 1.
Using the thinning method with λ = 11 would generate an expected number of 11 events each of which would require a random number to determine whether or not to accept it. On the other hand, to generate a Poisson process with rate 10 and then merge it with a generated nonhomogeneous Poisson process with rate λ(s) = s, 0 < s < 1, would yield an equally distributed number of event times but with the expected number needing to be checked to determine acceptance being equal to 1.
Another way to make the simulation of nonhomogeneous Poisson processes more efficient is to make use of superpositions. For instance, consider the process where
A plot of this intensity function is given in Figure 11.3. One way of simulating this process up to time 4 is to first generate a Poisson process with rate 1 over this interval; then generate a Poisson process with rate e − 1 over this interval, accept all events in (1, 3), and only accept an event at time t that is not contained in (1, 3) with probability [λ(t) − 1]/(e − 1); then generate a Poisson process with rate e 2.25 − e over the interval (1, 3), accepting all event times between 1.5 and 2.5 and any event time t outside this interval with probability [λ(t) − e]/(e 2.25 − e). The superposition of these processes is the desired nonhomogeneous Poisson process. In other words, what we have done is to break up λ(t) into the following nonnegative parts:
where
and where the thinning algorithm (with a single interval in each case) was used to simulate the constituent nonhomogeneous processes.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123756862000017
Sampling Distributions
Kandethody M. Ramachandran , Chris P. Tsokos , in Mathematical Statistics with Applications in R (Second Edition), 2015
4A A Method to Obtain Random Samples from Different Distributions
Most of the statistical software packages contain a random number generator that produces approximations to random numbers from the uniform distribution U[0, 1]. To simulate the observation of any other continuous random variables, we can start with uniform random numbers and associate these to the distribution we want to simulate. For example, suppose we wish to simulate an observation from the exponential distribution
F(x) = 1 − e^(−0.5x), x ≥ 0.
First produce the value of y from the uniform distribution. Then solve for x from the equation
y = F(x) = 1 − e^(−0.5x).
So x = [−ln(1 − y)]/0.5 is the corresponding value of the exponential random variable. For instance, if y = 0.67, then x = [−ln(1 − y)]/0.5 = 2.2173. If we wish to simulate a sample from the distribution F from the different values of y obtained from the uniform distribution, the procedure is repeated for each new observation x.
- (a)
-
Simulate 10 observations of a random variable having exponential distribution with mean and standard deviation both equal to 2.
- (b)
-
Select 1500 random samples of size n = 10 measurements from a population with an exponential distribution with mean and standard deviation both equal to 2. Calculate sample mean for each of these 1500 samples and draw a relative frequency histogram. Based on Theorems 4.1.1 and 4.4.1, what can you conclude?
It should be noted that, in general, if Y ~ U(0, 1), then we can show that X = −(1/λ) ln(1 − Y) will give an exponential random variable with parameter λ. Uniform random variables could also be used to generate random variables from other distributions. For example, let the U_i be iid U[0, 1] random variables. Then,
and
Of course, these transformations are useful only when v and α are integers. More efficient methods based on Monte Carlo simulations, such as MCMC methods, are discussed in Chapter 13.
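Parts (a) and (b) above can be carried out with a few lines of Python (a sketch, not the text's solution); observations with mean 2 are produced via x = −2 ln(1 − y), and the 1500 sample means of size-10 samples concentrate around 2 with spread roughly 2/√10, the approximately normal behavior that Theorems 4.1.1 and 4.4.1 predict.

```python
# Inverse-CDF simulation of exponential observations with mean 2, followed by
# the sampling distribution of the mean over 1500 samples of size n = 10.
import numpy as np

rng = np.random.default_rng(9)

# (a) ten observations from an exponential distribution with mean 2
y = rng.uniform(size=10)
x = -2.0 * np.log(1.0 - y)
print(np.round(x, 3))

# (b) 1500 samples of size n = 10; distribution of the sample mean
samples = -2.0 * np.log(1.0 - rng.uniform(size=(1500, 10)))
means = samples.mean(axis=1)
print(means.mean(), means.std())          # approximately 2 and 2/sqrt(10) ~ 0.632
# A relative frequency histogram of `means` (e.g., with matplotlib) shows the
# roughly bell-shaped sampling distribution predicted by the theorems cited above.
```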
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124171138000047
Source: https://www.sciencedirect.com/topics/mathematics/exponential-random-variable