# Stochastic Processes: A Summary

http://www.zhihu.com/question/23527615

– Measure-theoretic foundations of probability and stochastic processes: probability spaces, convergence theory, limit theory, martingale theory
– Markov processes
– Brownian motion
– Ito processes
– Levy processes: used, e.g., to solve ruin problems
– stochastic integrals
– stochastic differential equations (SDE)
– semimartingale theory

# Average Age of Renewal Process

## 1. Definition

• Let $S_n$ be the time of the $n$-th renewal
• Then $S_{N(t)}$ = time of the last renewal (prior to time $t$)
• Let $A(t) = t - S_{N(t)}$
• $A(t)$ is called the age of the renewal process
• Interpretation: $A(t)$ is the time elapsed since the last renewal

## 2. Average Age of Renewal Process

• What is $\lim_{t\to\infty} \frac{1}{t}\int_0^t A(s)\,ds$, the long-run average age?
• $\int_0^{X_n} s\,ds = \frac{X_n^2}{2}$ is the area under the age curve for one cycle
• so the expected area ("reward") per cycle is $E[X^2]/2$, while the expected cycle length is $E[X]$
• Thus the long-run average age of the process is $\frac{E[X^2]}{2E[X]}$
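As a sanity check, the long-run average age $E[X^2]/(2E[X])$ can be estimated by simulation. The sketch below assumes exponential(1) interarrival times, so the theoretical value is $2/(2 \cdot 1) = 1$; the rate, horizon, and seed are illustrative choices, not part of the notes.

```python
import random

def average_age(rate=1.0, horizon=200000.0, seed=42):
    """Estimate the time-average of the age A(t) over [0, horizon]."""
    rng = random.Random(seed)
    t, area = 0.0, 0.0
    while t < horizon:
        x = rng.expovariate(rate)   # length of this renewal cycle
        x = min(x, horizon - t)     # truncate the final partial cycle
        area += x * x / 2.0         # the age rises linearly 0 -> x, area = x^2/2
        t += x
    return area / horizon

est = average_age()
print(est)  # close to E[X^2]/(2E[X]) = 1 for exponential(1) cycles
```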

# Renewal-Reward Process

## 1. Definition

• Let $\{N(t), t \ge 0\}$ be a renewal process with i.i.d. interarrival (cycle) times $X_1, X_2, \ldots$
• Let $R_n$ = reward earned at the $n$-th renewal
• Assume $R_1, R_2, \ldots$ are i.i.d., but $R_n$ can depend on $X_n$ (the length of the $n$-th cycle)
Then $R(t) = \sum_{n=1}^{N(t)} R_n$ is a renewal reward process.
Intuitive explanation: $R(t)$ = cumulative reward earned up to time $t$

Proposition 7.3
Provided $E[R] < \infty$ and $E[X] < \infty$, then with probability 1,
$$\lim_{t\to\infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]}, \qquad \text{and also} \qquad \lim_{t\to\infty} \frac{E[R(t)]}{t} = \frac{E[R]}{E[X]}.$$
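A minimal simulation sketch of the renewal-reward theorem: it assumes exponential(1) cycle lengths and a cycle-dependent reward $R_n = X_n^2$, so the limit is $E[R]/E[X] = E[X^2]/E[X] = 2$. All parameters are illustrative.

```python
import random

def renewal_reward_rate(horizon=200000.0, seed=7):
    """Estimate R(t)/t for rewards R_n = X_n^2 with X_n ~ Exponential(1)."""
    rng = random.Random(seed)
    t, total_reward = 0.0, 0.0
    while True:
        x = rng.expovariate(1.0)   # X_n: length of the n-th cycle
        if t + x > horizon:        # stop before the incomplete final cycle
            break
        t += x
        total_reward += x * x      # R_n = X_n^2, earned at the renewal epoch
    return total_reward / horizon

rate = renewal_reward_rate()
print(rate)  # close to E[R]/E[X] = 2
```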


# Reversible Markov Chain

## 1. Definition

Let $\{X_n\}$ be a Markov chain with transition probabilities $P_{ij}$ and stationary probabilities $\pi_i$.

E.g. A sample realization is 1, 2, 3, 4, 5

Let $\{Y_n\}$ be the same sequence in reverse, i.e. 5, 4, 3, 2, 1

Let $Q_{ij}$ be the transition probabilities of the reversed process. That is

$Q_{ij} = P(X_m = j \mid X_{m+1} = i)$
$= \frac{P(X_m = j,\; X_{m+1} = i)}{P(X_{m+1} = i)}$
$= \frac{P(X_m = j)\, P_{ji}}{P(X_{m+1} = i)}$

## 2. Conclusion

In the steady state, assuming the limiting probabilities exist, we have
$Q_{ij} = \frac{\pi_j P_{ji}}{\pi_i}$, or equivalently $\pi_i Q_{ij} = \pi_j P_{ji}$.

The above equation says that the rate of transitions from $i$ to $j$ in the reversed chain equals the rate of transitions from $j$ to $i$ in the forward chain.

A DTMC is time reversible if $Q_{ij} = P_{ij}$ for all $i, j$; equivalently, if the detailed balance equations $\pi_i P_{ij} = \pi_j P_{ji}$ hold for all $i, j$.
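A numerical sketch of the detailed balance test $\pi_i P_{ij} = \pi_j P_{ji}$: the two example chains (a symmetric random walk, which is reversible, and a biased cycle, which is not) are illustrative choices.

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1 as a linear system."""
    n = len(P)
    A = P.T - np.eye(n)
    A[-1, :] = 1.0              # replace one redundant equation by normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def is_reversible(P, tol=1e-9):
    """Check detailed balance: pi_i P_ij == pi_j P_ji for all i, j."""
    pi = stationary(P)
    flow = pi[:, None] * P      # flow[i, j] = pi_i * P_ij
    return bool(np.allclose(flow, flow.T, atol=tol))

walk = np.array([[0.5, 0.5, 0.0],
                 [0.5, 0.0, 0.5],
                 [0.0, 0.5, 0.5]])   # symmetric walk: reversible
cycle = np.array([[0.0, 0.9, 0.1],
                  [0.1, 0.0, 0.9],
                  [0.9, 0.1, 0.0]])  # biased cycle: not reversible
print(is_reversible(walk), is_reversible(cycle))  # True False
```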

# Markov Chain — Birth-Death Processes

– Stochastic Course Notes

# Markov Chain — Branching Process

– Stochastic Course Notes

## 1. Definition

Consider a population

• Each individual produces $j$ new offspring each period with probability $P_j$, $j \ge 0$. Key assumption: this offspring distribution is constant over time and is the same for all individuals.
• Assume that $P_j < 1$ for all $j$ (i.e., the process is not deterministic)
• Let $X_n$ be the size of the population at period $n$

Analysis: Let $\mu = \sum_j j P_j$ be the mean number of offspring per individual. Then $E[X_n] = \mu^n E[X_0]$, so the expected population size grows geometrically when $\mu > 1$ and shrinks when $\mu < 1$. Starting from a single individual, the extinction probability $\pi_0$ is the smallest nonnegative solution of $\pi_0 = \sum_j \pi_0^j P_j$; it equals 1 when $\mu \le 1$ and is strictly less than 1 when $\mu > 1$.
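The extinction probability, the smallest root of $\pi_0 = \sum_j \pi_0^j P_j$, can be found by fixed-point iteration starting from 0. The offspring distribution below ($P_0 = 0.25$, $P_1 = 0.25$, $P_2 = 0.5$, so $\mu = 1.25 > 1$) is an assumed example.

```python
def extinction_probability(offspring_pmf, iterations=10000):
    """Iterate pi <- sum_j P_j * pi^j from pi = 0; converges to the smallest root."""
    pi = 0.0
    for _ in range(iterations):
        pi = sum(p * pi**j for j, p in enumerate(offspring_pmf))
    return pi

pi0 = extinction_probability([0.25, 0.25, 0.5])
print(pi0)  # smallest root of 0.5 x^2 + 0.25 x + 0.25 = x, i.e. x = 0.5
```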

# Markov Chain – Classifications of States

## 1. Definition of States

• Communicate: States $i$ and $j$ communicate ($i \leftrightarrow j$) if $i$ is reachable from $j$ and $j$ is reachable from $i$. (Note: a state $i$ always communicates with itself)
• Irreducible: a Markov chain is irreducible if all states are in the same communication class.
• Absorbing state: a state $i$ is absorbing if $P_{ii} = 1$, so that once entered it is never left.
• Closed set: a set of states $S$ is a closed set if no state outside of $S$ is reachable from any state in $S$. (This is like an absorbing set of states)
• $f_i$: the probability that, starting in state $i$, the process ever returns to state $i$
• Transient: a state is transient if $f_i < 1$
• Recurrent: a state that is not transient is recurrent, i.e., $f_i = 1$. There are two types of recurrent states:
• Positive recurrent: the expected time to return to the state is finite
• Null recurrent: the expected time to return to state $i$ is infinite (this requires an infinite number of states)
• Periodic: state $i$ has period $k$ if $k = \gcd\{n \ge 1 : P_{ii}^{(n)} > 0\}$, i.e., $k$ is the largest number such that every path leading from state $i$ back to state $i$ takes a multiple of $k$ transitions
• Aperiodic: a state is aperiodic if it has period $k = 1$
• Ergodic: a state is ergodic if it is positive recurrent and aperiodic.
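The period definition above can be computed directly as the gcd of the return times, scanning $n$ up to a cutoff (sufficient for small chains). A sketch, with a deterministic two-state cycle as an assumed example:

```python
import numpy as np
from math import gcd
from functools import reduce

def periods(P, max_n=50):
    """Period of each state: gcd{ n >= 1 : P^n(i,i) > 0 }, scanned up to max_n."""
    n_states = len(P)
    Pn = np.eye(n_states)
    return_times = {i: [] for i in range(n_states)}
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        for i in range(n_states):
            if Pn[i, i] > 0:
                return_times[i].append(n)
    # gcd of all observed return times; 0 means no return observed (transient-like)
    return {i: reduce(gcd, ts) if (ts := return_times[i]) else 0
            for i in range(n_states)}

# Deterministic 2-cycle: both states have period 2, so the chain is periodic.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(periods(P))  # {0: 2, 1: 2}
```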

# Markov Chain (Continuous Time)

## 1. Definition

t-step transition probability: Let $P_{ij}(t) = P(X(s+t) = j \mid X(s) = i)$ be the probability that the system is in state $j$ after $t$ time units, given that the system is in state $i$ now.

$P(X(s+t) = j \mid X(s) = i) = P(X(t) = j \mid X(0) = i)$ (by stationarity)

Define $q_{ij} = v_i P_{ij}$: the rate at which the process, when in state $i$, makes a transition into state $j$ (here $v_i$ is the rate of leaving state $i$ and $P_{ij}$ the embedded jump probability).

Lemma 6.2 (a): $\lim_{h\to 0} \frac{1 - P_{ii}(h)}{h} = v_i$

Lemma 6.2 (b): $\lim_{h\to 0} \frac{P_{ij}(h)}{h} = q_{ij}$ for $i \ne j$

Lemma 6.3 (Chapman-Kolmogorov): $P_{ij}(t+s) = \sum_k P_{ik}(t)\, P_{kj}(s)$ for all $s, t \ge 0$

Proof: Condition on the state at time $t$:
$P_{ij}(t+s) = \sum_k P(X(t+s) = j \mid X(t) = k,\, X(0) = i)\, P(X(t) = k \mid X(0) = i) = \sum_k P_{kj}(s)\, P_{ik}(t)$,
where the Markov property lets the first factor drop the conditioning on $X(0) = i$.
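Collecting the rates into the generator matrix $Q$ (off-diagonal entries $q_{ij}$, diagonal entries $-v_i$, so each row sums to 0) gives the transition function $P(t) = e^{Qt}$. A sketch using a plain Taylor series for the matrix exponential (adequate for small $\|Qt\|$); the two-state chain with rates $a, b$ is an assumed example with a known closed form.

```python
import numpy as np

def transition_matrix(Q, t, terms=60):
    """Compute P(t) = exp(Q t) by truncated Taylor series (small ||Q t|| only)."""
    n = len(Q)
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (Q * t) / k   # (Qt)^k / k!
        result = result + term
    return result

# Two-state chain: leave state 0 at rate a, leave state 1 at rate b.
a, b = 2.0, 1.0
Q = np.array([[-a, a],
              [b, -b]])
Pt = transition_matrix(Q, 0.5)
# Closed-form check: P_00(t) = b/(a+b) + a/(a+b) * exp(-(a+b) t)
closed = b / (a + b) + a / (a + b) * np.exp(-(a + b) * 0.5)
print(Pt[0, 0], closed)
```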

# Markov Chain (Discrete Time)

## 1. Definition

Let $\{X_n, n = 0, 1, 2, \ldots\}$ be a stochastic process, taking on a finite or countable number of values.

$\{X_n\}$ is a DTMC if it has the Markov property: given the present, the future is independent of the past, i.e.
$P(X_{n+1} = j \mid X_n = i,\, X_{n-1} = i_{n-1},\, \ldots,\, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)$

We define $P_{ij} = P(X_{n+1} = j \mid X_n = i)$; since the chain has stationary transition probabilities, this probability does not depend on $n$.

Transition probabilities satisfy $P_{ij} \ge 0$ and $\sum_j P_{ij} = 1$ for all $i$.

Proof: $P_{ij} \ge 0$ because each $P_{ij}$ is a probability, and $\sum_j P_{ij} = \sum_j P(X_{n+1} = j \mid X_n = i) = 1$ because, from state $i$, the chain must move to some state (possibly $i$ itself).
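A quick simulation sketch of the definition: sample a path from a transition matrix and check that the empirical one-step frequencies out of a state match its row of $P$. The 2x2 matrix and all parameters are assumed for illustration.

```python
import random

# Assumed 2-state transition matrix; each row sums to 1.
P = [[0.7, 0.3],
     [0.4, 0.6]]

def simulate(P, steps=100000, start=0, seed=1):
    """Sample a DTMC path of the given length from transition matrix P."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        i = path[-1]
        path.append(0 if rng.random() < P[i][0] else 1)  # sample next state
    return path

path = simulate(P)
# Empirical frequency of the transition 0 -> 1 among all visits to state 0.
from_0 = [path[k + 1] for k in range(len(path) - 1) if path[k] == 0]
freq_01 = sum(from_0) / len(from_0)
print(freq_01)  # close to P_01 = 0.3
```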

## 4. Limiting Probabilities

Theorem: For an irreducible, ergodic Markov chain, $\pi_j = \lim_{n\to\infty} P_{ij}^{(n)}$ exists and is independent of the starting state $i$. Moreover, $(\pi_j)$ is the unique nonnegative solution of $\pi_j = \sum_i \pi_i P_{ij}$ and $\sum_j \pi_j = 1$.

Two interpretations for $\pi_j$:

• The probability of being in state $j$ a long time into the future (large $n$)
• The long-run fraction of time spent in state $j$
Note:
• If the Markov chain is irreducible and ergodic, then interpretations 1 and 2 are equivalent
• Otherwise, $\pi$ is still the solution to $\pi_j = \sum_i \pi_i P_{ij}$, $\sum_j \pi_j = 1$, but only interpretation 2 is valid.
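A numerical sketch showing the two interpretations agree for an irreducible ergodic chain: solve $\pi = \pi P$, $\sum_j \pi_j = 1$ as a linear system, and compare with the rows of $P^n$ for large $n$. The 2x2 chain is an assumed example with $\pi = (4/7, 3/7)$.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # assumed irreducible ergodic chain

# Solve pi = pi P with sum(pi) = 1: drop one redundant balance equation
# and replace it with the normalization constraint.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))

# Interpretation 1: every row of P^n converges to pi as n grows.
Pn = np.linalg.matrix_power(P, 50)
print(pi, Pn[0])  # both approximately [4/7, 3/7]
```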