Stochastic Processes: Summary

References:
http://www.zhihu.com/question/23527615

General theory of stochastic processes:
– measure-theoretic foundations of probability and stochastic processes: probability space, convergence theory, limit theory, martingale theory
– Markov process
– stochastic integral
– stochastic differential equations
– semimartingale theory

Mathematical finance:
– stochastic integrals
– stochastic differential equations (SDE)
– semimartingale
   – Ito process
   – Levy process: used to solve ruin problems
– Brownian motion

Average Age of Renewal Process

1. Definition

  • Let S_n be the time of the n-th renewal
  • Then S_{N(t)} = time of the last renewal (prior to time t)
  • Let A(t) = t - S_{N(t)}
    • A(t) is called the age of the renewal process
    • Interpretation: A(t) is the time since last renewal 

2. Average Age of Renewal Process

  • What is R_n? R_n = int^{S_n}_{S_{n-1}} A(t) dt
  • R_n is the area under the age curve for one cycle
  • Over one cycle, A(t) rises linearly from 0 to X_n, so R_n = X^2_n/2 and E[R_n] = E[X^2_n]/2
  • Thus the long-run average age of the process is (verified numerically in the sketch below)
    • frac{E[R_n]}{E[X_n]} = frac{E[X^2_n]}{2E[X_n]}
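
A minimal simulation sketch of this limit, assuming exponential interarrival times with rate lam (so E[X^2_n]/(2E[X_n]) = (2/lam^2)/(2/lam) = 1/lam); the horizon T and grid step dt are arbitrary illustrative choices:

    import random

    # Simulate a renewal process with Exp(lam) interarrival times and
    # estimate the time-average age by integrating A(t) on a fine grid.
    lam = 2.0
    T = 2_000.0
    random.seed(0)

    renewals = []                        # renewal epochs S_1, S_2, ...
    t = random.expovariate(lam)
    while t < T:
        renewals.append(t)
        t += random.expovariate(lam)

    dt, area, last, i, s = 0.01, 0.0, 0.0, 0, 0.0
    while s < T:                         # integrate A(s) = s - S_{N(s)}
        while i < len(renewals) and renewals[i] <= s:
            last = renewals[i]           # last = S_{N(s)}
            i += 1
        area += (s - last) * dt
        s += dt

    print(area / T)                      # close to 1/lam = 0.5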

Renewal-Reward Process

1. Definition

  • Let N(t) be a renewal process
  • Let R_n = reward earned at n-th renewal
  • Assume the R_n are i.i.d., but R_n may depend on X_n (the length of the n-th cycle)
Then R(t) = sum^{N(t)}_{n=1} R_n is a renewal-reward process.
Intuitive explanation: R(t) = cumulative reward earned up to time t

2. Renewal Reward Theorem 

Proposition 7.3 
  • lim_{t to infty} frac{R(t)}{t} = frac{E[R_n]}{E[X_n]} with probability 1
  • lim_{t to infty} frac{E[R(t)]}{t} = frac{E[R_n]}{E[X_n]}
Provided E[R_n] < infty and E[X_n] < infty.
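
A minimal sketch of the theorem, under illustrative assumptions: cycle lengths X_n ~ Uniform(0, 2) and reward R_n = 3 X_n per cycle, so E[R_n]/E[X_n] = 3:

    import random

    # Renewal-reward simulation: R_n is i.i.d. across cycles but
    # depends on the cycle length X_n, as the theorem allows.
    random.seed(1)
    T = 100_000.0
    t = reward = 0.0
    while True:
        x = random.uniform(0.0, 2.0)     # cycle length X_n
        if t + x > T:
            break
        t += x
        reward += 3.0 * x                # reward R_n = 3 * X_n

    print(reward / t)                    # approaches E[R_n]/E[X_n] = 3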

3. Example: Inventory Problem

Problem: 



Solution:


Reversible Markov Chain

1. Definition

Let X_1, X_2, cdots be a Markov chain with transition probabilities P_{ij}.

P_{ij} = Pr{X_{n+1} = j| X_n = i}

E.g. A sample realization is 1, 2, 3, 4, 5

Let X_n, X_{n-1}, cdots be the same sequence in reverse, e.g. 5, 4, 3, 2, 1

Let Q_{ij} be the transition probabilities of the reversed process. That is
Q_{ij} = Pr{X_{n-1} = j | X_n = i}
       = frac{Pr{X_{n-1} = j, X_n = i}}{Pr{X_n = i}}
       = frac{Pr{X_{n-1} = j} Pr{X_n = i | X_{n-1} = j}}{Pr{X_n = i}}
       = frac{Pr{X_{n-1} = j} P_{ji}}{Pr{X_n = i}}

2. Conclusion

In the steady state, assuming the limiting probabilities exist, we have
Q_{ij} = frac{pi_j P_{ji}}{pi_i}  or equivalently pi_i Q_{ij} = pi_j P_{ji}

The above equation says that the rate of transitions from i to j in the reversed chain equals the rate of transitions from j to i in the forward chain.

A DTMC is time reversible if Q_{ij} = P_{ij}, i.e. if the detailed balance equations pi_i P_{ij} = pi_j P_{ji} hold.
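
A minimal numerical check of detailed balance; the 3-state transition matrix below is an arbitrary birth-death chain chosen for illustration (birth-death chains are always time reversible):

    import numpy as np

    # A 3-state birth-death chain: only nearest-neighbor transitions.
    P = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.3, 0.5],
                  [0.0, 0.4, 0.6]])

    # Stationary distribution pi: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()

    # Time reversible iff the matrix [pi_i P_ij] is symmetric
    # (detailed balance: pi_i P_ij = pi_j P_ji).
    F = pi[:, None] * P
    print(np.allclose(F, F.T))           # True for this chain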

Markov Chain — Branching Process

– Stochastic Course Notes

1. Definition

Consider a population

  • Each individual produces j new offspring each period with probability p_j, j geq 0. Key assumption: This distribution is constant over time.
  • Assume that p_j < 1 for all j (i.e., the offspring count is not deterministic)
  • Let X_n be the size of the population at period n

2. Population Size in the n-th Generation

Let mu = sum_j j p_j be the expected number of offspring of a single individual. Conditioning on the previous generation gives E[X_n] = mu E[X_{n-1}], hence E[X_n] = mu^n E[X_0].

3. Probability of Dying Out

Analysis
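
Let pi_0 be the probability that the population eventually dies out (starting from X_0 = 1). Then pi_0 is the smallest solution in [0, 1] of the fixed-point equation pi_0 = sum_j p_j (pi_0)^j; in particular pi_0 = 1 when mu <= 1. A minimal sketch, assuming for illustration the offspring distribution p_0 = 0.25, p_1 = 0.25, p_2 = 0.5 (so mu = 1.25 > 1):

    # Extinction probability: smallest fixed point in [0, 1] of the
    # offspring generating function phi(s) = sum_j p_j s^j.
    p = [0.25, 0.25, 0.5]                # p_0, p_1, p_2; mu = 1.25

    def phi(s):
        return sum(pj * s**j for j, pj in enumerate(p))

    s = 0.0                              # iterating from 0 converges
    for _ in range(200):                 # to the smallest fixed point
        s = phi(s)
    print(s)                             # 0.5: solves s = 1/4 + s/4 + s^2/2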

Example

4. Example: Family Genetics

Problem:

Solution: 


Markov Chain – Classifications of States

1. Definition of States

  • Communicate: States i and j communicate (i leftrightarrow j) if i is reachable from j and j is reachable from i. (Note: a state i always communicates with itself)
  • Irreducible: a Markov chain is irreducible if all states are in the same communication class (see the sketch after this list)
  • Absorbing state: P_{ii} = 1
  • Closed set: a set of states S is closed if no state outside of S is reachable from any state in S. (This is like an absorbing set of states)
  • f_i: the probability that, starting in state i, the process eventually returns to state i
  • Transient: a state is transient if f_i < 1
  • Recurrent: a state that is not transient is recurrent, i.e., f_i = 1. There are two types of recurrent states
    • Positive recurrent: the expected time to return to the state is finite
    • Null recurrent: the expected time to return to state i is infinite (this requires an infinite number of states)
  • Periodic: state i has period k if k is the smallest number such that the length of every path from state i back to state i is a multiple of k; the state is periodic if k > 1
  • Aperiodic: a state with period k = 1
  • Ergodic: a state is ergodic if it is positive recurrent and aperiodic.
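
A minimal sketch that finds the communication classes of a finite chain by intersecting reachability sets; the transition matrix is a hypothetical toy example with one closed class {0, 1} and one transient state 2:

    # Communication classes via mutual reachability (i <-> j).
    P = [[0.5, 0.5, 0.0],
         [0.5, 0.5, 0.0],
         [0.3, 0.3, 0.4]]                # state 2 leaks into closed {0, 1}

    n = len(P)

    def reachable(i):
        seen, stack = {i}, [i]           # depth-first search on P_uv > 0
        while stack:
            u = stack.pop()
            for v in range(n):
                if P[u][v] > 0 and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    R = [reachable(i) for i in range(n)]
    classes = {frozenset(k for k in range(n) if k in R[i] and i in R[k])
               for i in range(n)}
    print(classes)                       # {frozenset({0, 1}), frozenset({2})}
    print(len(classes) == 1)             # irreducible? False here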

2. Example: Gambler’s Ruin

3. Example: Random Walk


Markov Chain (Discrete Time)

1. Definition

Let {X_n, n =0,1,cdots} be a stochastic process, taking on a finite or countable number of values.

X_n is a DTMC if it has the Markov property: given the present, the future is independent of the past
Pr{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, cdots, X_1 = i_1, X_0 = i_0} = Pr{X_{n+1} = j | X_n = i}

We define P_{ij} = Pr{X_{n+1} = j | X_n = i}; since X_n has stationary transition probabilities, this probability does not depend on n.

Transition probabilities satisfy sum_j P_{ij} = 1

2. n-Step Transition Probabilities

P^{n+m}_{ij} = sum_k P^n_{ik} P^m_{kj}  (the Chapman-Kolmogorov equations)

Proof: condition on the state at time n and use the Markov property:
P^{n+m}_{ij} = sum_k Pr{X_{n+m} = j | X_n = k} Pr{X_n = k | X_0 = i} = sum_k P^m_{kj} P^n_{ik}
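
A minimal numerical check, using an arbitrary two-state transition matrix chosen for illustration: the (n+m)-step matrix equals the product of the n-step and m-step matrices.

    import numpy as np

    # Chapman-Kolmogorov: P^(n+m) = P^n @ P^m for the matrix of P_{ij}.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    n, m = 3, 2
    lhs = np.linalg.matrix_power(P, n + m)
    rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)
    print(np.allclose(lhs, rhs))         # True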


3. Example: Coin Flips

4. Limiting Probabilities

Theorem: For an irreducible, ergodic Markov Chain, pi_j = lim_{n to infty} P^n_{ij} exists and is independent of the starting state i. Then pi_j is the unique solution of pi_j = sum_i pi_i P_{ij} and sum_j pi_j = 1.

Two interpretations of pi_i:

  • The probability of being in state i a long time into the future (large n)
  • The long-run fraction of time in state i
Note:
  • If the Markov chain is irreducible and ergodic, then interpretations 1 and 2 are equivalent
  • Otherwise, pi is still a solution to pi = pi P (with sum_j pi_j = 1), but only interpretation 2 is valid.
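
A minimal sketch for computing pi numerically, assuming an arbitrary two-state chain: solve pi = pi P together with sum_j pi_j = 1 by replacing one (redundant) balance equation with the normalization constraint.

    import numpy as np

    # Solve pi (P - I) = 0 with sum(pi) = 1: transpose the system and
    # replace the last equation by the normalization row.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                       # row encoding sum_j pi_j = 1
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi)                            # [0.8, 0.2]
    print(pi @ P)                        # equals pi: stationarity check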