# Expectation–maximization algorithm

## 1 Introduction to the EM algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.
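Concretely, starting from a current parameter estimate $\theta^{(t)}$, each EM iteration alternates two steps:

$$\begin{gathered} Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X, \theta^{(t)}}\left[\log L(\theta; X, Z)\right] \quad \text{(E-step)} \\ \theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)}) \quad \text{(M-step)} \end{gathered}$$

where $X$ is the observed data, $Z$ the latent variables, and $L(\theta; X, Z)$ the complete-data likelihood.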

https://zh.wikipedia.org/wiki/最大期望算法
The EM algorithm is commonly used to fit the parameters of a GMM (Gaussian mixture model): http://aandds.com/blog/gmm.html

## 2 A worked EM example

The four steps of the EM algorithm shown in (b) of Figure 1 are described below:
(1). EM starts with an initial guess of the parameters.
(2). In the E-step, a probability distribution over possible completions is computed using the current parameters. The counts shown in the table are the expected numbers of heads and tails according to this distribution.
(3). In the M-step, new parameters are determined using the current completions.
(4). After several repetitions of the E-step and M-step, the algorithm converges.

For the experiment with 9 heads and 1 tail, under the current guesses $\theta_A = 0.6$ and $\theta_B = 0.5$, the likelihoods of each coin (up to the binomial coefficient, which is the same for both and cancels below) are

$\begin{gathered} \theta_A^9 (1-\theta_A)^{10-9} \simeq 0.004 \\ \theta_B^9 (1-\theta_B)^{10-9} \simeq 0.001 \end{gathered}$

Normalizing yields the E-step weights for coins A and B:

$\begin{gathered} \frac{0.004}{0.004+0.001}=0.8 \\ \frac{0.001}{0.004+0.001}=0.2 \end{gathered}$
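These weights can be reproduced in a few lines of Python; the initial guesses $\theta_A = 0.6$, $\theta_B = 0.5$ and the 9-heads/1-tail experiment are the ones from the worked example above:

```python
# E-step weights for one experiment (9 heads, 1 tail) under the
# initial guesses theta_A = 0.6 and theta_B = 0.5.
theta_A, theta_B = 0.6, 0.5
heads, tails = 9, 1

# The binomial coefficient C(10, 9) is identical for both coins,
# so it cancels in the normalization and can be omitted.
lik_A = theta_A**heads * (1 - theta_A)**tails  # ~0.004
lik_B = theta_B**heads * (1 - theta_B)**tails  # ~0.001

weight_A = lik_A / (lik_A + lik_B)
weight_B = lik_B / (lik_A + lik_B)
print(round(weight_A, 2), round(weight_B, 2))  # 0.8 0.2
```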

### 2.1 A Python implementation of EM

```python
#!/usr/bin/env python
# E-M coin-toss example as given in the EM tutorial paper by Do and Batzoglou
import math
import numpy as np

def get_mn_log_likelihood(obs, probs):
    """Return the log-likelihood of obs under a multinomial with parameters probs.

    Multinomial distribution log PMF:
    ln[f(x|n, p)] = [ln(n!) - (ln(x1!)+ln(x2!)+...+ln(xk!))]
                    + [x1*ln(p1)+x2*ln(p2)+...+xk*ln(pk)]
    """
    multinomial_coeff_denom = 0.0
    prod_probs = 0.0
    for x in range(len(obs)):  # loop through state counts in each observation
        multinomial_coeff_denom += math.log(math.factorial(obs[x]))
        prod_probs += obs[x] * math.log(probs[x])
    multinomial_coeff = math.log(math.factorial(sum(obs))) - multinomial_coeff_denom
    return multinomial_coeff + prod_probs

# The five experiments, each recorded as (heads, tails) out of 10 tosses:
# 1st: Coin B, {HTTTHHTHTH}, 5H,5T
# 2nd: Coin A, {HHHHTHHHHH}, 9H,1T
# 3rd: Coin A, {HTHHHHHTHH}, 8H,2T
# 4th: Coin B, {HTHTTTHHTT}, 4H,6T
# 5th: Coin A, {THHHTHHHTH}, 7H,3T
experiments = [(5, 5), (9, 1), (8, 2), (4, 6), (7, 3)]

# Initial guesses for each coin's P(heads); index j tracks the iteration.
pA_heads = np.zeros(100)
pA_heads[0] = 0.60
pB_heads = np.zeros(100)
pB_heads[0] = 0.50

# E-M begins!
delta = 0.001
j = 0  # iteration counter
improvement = float('inf')
while improvement > delta:
    expectation_A = np.zeros((5, 2), dtype=float)  # expected (heads, tails) credited to A
    expectation_B = np.zeros((5, 2), dtype=float)  # expected (heads, tails) credited to B
    for i in range(len(experiments)):
        e = experiments[i]  # i'th experiment
        ll_A = get_mn_log_likelihood(e, [pA_heads[j], 1 - pA_heads[j]])
        ll_B = get_mn_log_likelihood(e, [pB_heads[j], 1 - pB_heads[j]])
        # E-step: weight of each coin, proportional to its likelihood
        weightA = math.exp(ll_A) / (math.exp(ll_A) + math.exp(ll_B))
        weightB = math.exp(ll_B) / (math.exp(ll_A) + math.exp(ll_B))
        expectation_A[i] = weightA * np.array(e)
        expectation_B[i] = weightB * np.array(e)
    # M-step: re-estimate each coin's P(heads) from the expected counts
    pA_heads[j + 1] = expectation_A.sum(axis=0)[0] / expectation_A.sum()
    pB_heads[j + 1] = expectation_B.sum(axis=0)[0] / expectation_B.sum()
    improvement = max(abs(pA_heads[j + 1] - pA_heads[j]),
                      abs(pB_heads[j + 1] - pB_heads[j]))
    j += 1

print(round(pA_heads[j], 2), round(pB_heads[j], 2))  # converges near 0.80 and 0.52
```


## 3 EM is sensitive to initialization

The result of EM depends on the choice of initial values. In the example above, if the initial values are chosen as $\theta_A = 0.5, \; \theta_B = 0.6$ instead, the converged estimates become $\theta_A = 0.52, \; \theta_B = 0.80$.
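A compact way to see this is a vectorized re-implementation of the two-coin EM run from both initializations. This is a sketch, not the original code: the helper name `em_two_coins` and the fixed iteration count are my own choices, and the binomial coefficients are omitted because they cancel in the E-step normalization.

```python
import numpy as np

def em_two_coins(heads, n, theta_A, theta_B, iters=100):
    """EM for the two-coin mixture, vectorized over experiments."""
    for _ in range(iters):
        # E-step: per-experiment weights, proportional to each coin's likelihood
        # (the common binomial coefficient cancels in the normalization)
        lik_A = theta_A**heads * (1 - theta_A)**(n - heads)
        lik_B = theta_B**heads * (1 - theta_B)**(n - heads)
        w_A = lik_A / (lik_A + lik_B)
        w_B = 1.0 - w_A
        # M-step: re-estimate P(heads) from the expected counts
        theta_A = np.sum(w_A * heads) / np.sum(w_A * n)
        theta_B = np.sum(w_B * heads) / np.sum(w_B * n)
    return theta_A, theta_B

heads = np.array([5, 9, 8, 4, 7])   # heads per experiment
n = np.full(5, 10)                  # tosses per experiment

print(em_two_coins(heads, n, 0.6, 0.5))  # ~ (0.80, 0.52)
print(em_two_coins(heads, n, 0.5, 0.6))  # labels swap: ~ (0.52, 0.80)
```

Swapping the initial values only swaps which coin plays which role; both runs find the same mixture, just with the labels A and B exchanged.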

## 4 EM does not guarantee a global optimum

EM converges to a local optimum of the likelihood, which is not necessarily the global optimum. A common remedy is to run the algorithm from several different initializations and keep the solution with the highest likelihood.

Created: <2015-07-04 Sat 00:00>

Last updated: <2017-12-02 Sat 10:09>

Creator: Emacs 25.3.1 (Org mode 9.1.4)