Machine Learning: Support Vector Machines (SVM)

Pros: low generalization error, computationally inexpensive, easy-to-interpret results
Cons: sensitive to parameter tuning and to the choice of kernel function; without modification, the original classifier only handles binary classification problems

Linear Separability

  • Given a two-dimensional dataset, if a straight line can split the data into two classes, with everything on one side of the line belonging to one class and everything on the other side belonging to the other, we say the dataset is linearly separable
  • That line is called the separating hyperplane, i.e. the decision boundary used for classification
  • Likewise, for an N-dimensional dataset, if there exists an (N-1)-dimensional hyperplane that separates the classes, the dataset is linearly separable
  • With a classifier built this way, the farther a data point lies from the decision boundary, the more confident its prediction is (a minimal sketch follows this list)
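
As a minimal sketch of this idea (the points, \(w\), and \(b\) below are made-up values for illustration only), classification amounts to checking which side of a line a point falls on:

import numpy as np

# Hypothetical 2D points: two clusters that a straight line can separate
X = np.array([[1.0, 2.0], [2.0, 3.0],      # class +1
              [6.0, 1.0], [7.0, 2.0]])     # class -1
y = np.array([1, 1, -1, -1])

# A hand-picked separating line w.x + b = 0 (illustration only, not learned)
w = np.array([-1.0, 0.5])
b = 3.0

pred = np.sign(X.dot(w) + b)   # which side of the line each point is on
print(pred)                    # [ 1.  1. -1. -1.] -> matches y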

Support Vectors

  • A dataset may admit more than one separating hyperplane
  • We want to find the points closest to the separating hyperplane and make their distance to it as large as possible
  • The hyperplane that satisfies this condition is the most robust and classifies unseen data most accurately
  • This distance is called the margin
  • The points closest to the separating hyperplane are the support vectors
  • SVM (Support Vector Machine) maximizes the distance from the support vectors to the separating hyperplane

Finding the Maximum Margin

The original post illustrates this with a figure of a two-dimensional dataset that has two features.

The points closest to the separating hyperplane are the support vectors. For the optimal separating hyperplane, the support vectors on both sides are at equal distances from it, and this distance equals the distance between the separating hyperplane and the plane that passes through the support vectors parallel to it.
  
The separating hyperplane can be written as
  \(\small W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + b = 0\)
  
or, more compactly, as
  \(\small WX + b = 0\)  or  \(\small W^{T}X + b = 0\)
  
Parallel planes have proportional \(\small W\) coefficients, so the two planes parallel to it can be written as

  \(\small \alpha(W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m}) + c = 0\)
  \(\small \beta(W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m}) + d = 0\)
  
Scaling a plane's \(\small W\) and \(\small b\) coefficients by the same factor still gives the same plane, so these can be rewritten as

  \(\small W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + c/\alpha = 0\)
  \(\small W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + d/\beta = 0\)
  
Since the two planes are equidistant from the separating hyperplane, we have

  \(\small b - c/\alpha = d/\beta - b = e\)
  
so they can be rewritten further as

  \(\small W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + b - e = 0\)
  \(\small W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + b + e = 0\)
  
Dividing all three equations by \(\small e\) gives

  \(\small (W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + b)/e = 0\)
  \(\small (W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + b)/e = 1\)
  \(\small (W_{1}X_{1} + ... + W_{j}X_{j} + ... + W_{m}X_{m} + b)/e = -1\)
  
Renaming \(\small W/e\) and \(\small b/e\) back to \(\small W\) and \(\small b\), we finally obtain

  \(\small W^{T}X + b = 0\)
  \(\small W^{T}X + b = 1\)
  \(\small W^{T}X + b = -1\)
  
The value 1 is chosen to make the distance easy to compute: the distance between the parallel lines \(\small ax + by + c = 0\) and \(\small ax + by + d = 0\) is

  \(\Large \frac{|c-d|}{\sqrt{a^{2}+b^{2}}}\)
  
So the distance from the separating hyperplane to each of the two planes through the support vectors is

  \(\large \frac{1}{\sqrt{W_{1}^{2}+...+W_{j}^{2}+...+W_{m}^{2}}} = \frac{1}{||W||}\)
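
A quick numeric check of this margin formula (the \(W\), \(b\), and the point below are made up for illustration): a point lying on \(W^{T}X + b = 1\) is exactly \(1/||W||\) away from the plane \(W^{T}X + b = 0\).

import numpy as np

W = np.array([3.0, 4.0])     # ||W|| = 5
b = -2.0
x = np.array([1.0, 0.0])     # W.x + b = 3 - 2 = 1, so x lies on the "+1" plane

# point-to-plane distance |W.x + b| / ||W|| should equal 1 / ||W|| = 0.2
print(abs(W.dot(x) + b) / np.linalg.norm(W), 1 / np.linalg.norm(W))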
    
Samples satisfying \(\small W^{T}X + b > 0\) are labeled \(\small y = 1\)
Samples satisfying \(\small W^{T}X + b < 0\) are labeled \(\small y = -1\)

Since the samples closest to the hyperplane lie on \(\small W^{T}X + b = \pm 1\), the samples can be described more precisely as

Samples satisfying \(\small W^{T}X + b \geqslant 1\) are labeled \(\small y = 1\)
Samples satisfying \(\small W^{T}X + b \leqslant -1\) are labeled \(\small y = -1\)
    
  
SVM training therefore looks for \(\small W\) and \(\small b\) that satisfy

  max \(\large \frac{1}{||W||}\)

  s.t.  \(\normalsize y_{i}(W^{T}x_{i} + b) \geqslant 1\), \(\small i = 1,2,...,n\)  (s.t. means "subject to", i.e. the constraints)


Maximizing \(\large \frac{1}{||W||}\) is equivalent to minimizing \(\large \frac{1}{2}||W||^{2}\) (the factor \(\small \frac{1}{2}\) and the square are chosen to make the later derivation easier)


So the optimization objective can also be written as

  min \(\normalsize \frac{1}{2}||W||^{2}\)

  s.t.  \(\normalsize y_{i}(W^{T}x_{i} + b) \geqslant 1\), \(\small i = 1,2,...,n\)


The Method of Lagrange Multipliers
  
  Given a function \(\small y = f(X)\) and constraints \(\small h_{i}(X) = 0\), to find the extrema of \(\small f\), build the Lagrangian

    \(\large L = f(X) + \sum_{i=1}^{k}\alpha_{i}h_{i}(X)\)

  Take the derivatives with respect to \(\small X\) and \(\small \alpha_{i}\) and set them to 0, i.e.

    \(\small \nabla_{X} L = 0\)
    \(\small \nabla_{\alpha} L = 0\)

  Substituting the resulting \(\small X\) into \(\small f(X)\) gives the extremum

  If the constraints are inequalities \(\small h_{i}(X) \leqslant 0\),
  the Lagrange multiplier method can still be used as long as the KKT conditions hold (not derived here)
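
  A tiny worked example of the method (my own, not from the original post): minimize \(\small f(x, y) = x^{2} + y^{2}\) subject to \(\small h(x, y) = x + y - 1 = 0\)

    \(\small L = x^{2} + y^{2} + \alpha(x + y - 1)\)

    \(\small \nabla_{x} L = 2x + \alpha = 0\),  \(\small \nabla_{y} L = 2y + \alpha = 0\),  \(\small \nabla_{\alpha} L = x + y - 1 = 0\)

  which gives \(\small x = y = \frac{1}{2}\), \(\small \alpha = -1\), and the constrained minimum \(\small f = \frac{1}{2}\)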


The Lagrangian of the optimization objective above is therefore

  \(\large L = \frac{1}{2}||W||^{2} + \sum_{i=1}^{n}\alpha_{i}(1-y_{i}(W^{T}x_{i}+b))\)
  
Take the derivatives with respect to \(\small W\) and \(\small b\) and set them to 0
  
  \(\large \frac{\partial L}{\partial W_{j}} = \frac{\partial \frac{1}{2}({W_{1}^{2}+...+W_{j}^{2}+...+W_{m}^{2}})}{\partial W_{j}} - \sum_{i=1}^{n}\frac{\partial \alpha_{i} y_{i}W_{j}x_{i}^{j}}{\partial W_{j}}\)

     \(\large = W_{j} - \sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j}\)

     \(\large = 0\)


   \(\large \frac{\partial L}{\partial b} = -\frac{\partial \sum_{i=1}^{n}\alpha_{i}y_{i}b}{\partial b}\)

     \(\large = -\sum_{i=1}^{n}\alpha_{i}y_{i}\)

     \(\large = 0\)
  
Substituting these results back into \(\small L\) gives

  \(\large L = \frac{1}{2}||W||^{2} + \sum_{i=1}^{n}\alpha_{i}(1-y_{i}(W^{T}x_{i}+b))\)

    \(\large = \frac{1}{2}(W_{1}^{2}+...+W_{m}^{2}) + \sum_{i=1}^{n}\alpha_{i} - \sum_{i=1}^{n}\alpha_{i}y_{i}(W^{T}x_{i}) - \sum_{i=1}^{n}\alpha_{i}y_{i}b\)

    \(\large = \frac{1}{2}(W_{1}^{2}+...+W_{m}^{2}) + \sum_{i=1}^{n}\alpha_{i} - \sum_{i=1}^{n}\alpha_{i}y_{i}(W^{T}x_{i})\)

    \(\large = \sum_{i=1}^{n}\alpha_{i} + \sum_{j=1}^{m}\frac{1}{2}W_{j}^{2} - \sum_{i=1}^{n}\alpha_{i}y_{i}(W^{T}x_{i})\)

    \(\large = \sum_{i=1}^{n}\alpha_{i} + \sum_{j=1}^{m}\frac{1}{2}(\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j})(\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j}) - \sum_{i=1}^{n}\alpha_{i}y_{i}\sum_{j=1}^{m}(W_{j}x_{i}^{j})\)

    \(\large = \sum_{i=1}^{n}\alpha_{i} + \sum_{j=1}^{m}\frac{1}{2}(\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j})(\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j}) - \sum_{i=1}^{n}\alpha_{i}y_{i}\sum_{j=1}^{m}((\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j})x_{i}^{j})\)

    \(\large = \sum_{i=1}^{n}\alpha_{i} - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j} y_{i}y_{j}x_{i}^{T}x_{j}\)

  s.t.  \(\large \sum_{i=1}^{n}\alpha_{i}y_{i}=0\)

  s.t.  \(\large \alpha_{i}\geqslant0\), \(\small i = 1,2,...,n\)
  
After solving for \(\small \alpha\), substituting \(\large W_{j} = \sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j}\) into \(\large f(X) = W^{T}X+b\) gives the value at a point \(\small x\):

  \(\large f(X) = W^{T}X+b\)

      \(\large = (\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{1})x^{1} + ... + (\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j})x^{j} + ... + (\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{m})x^{m} + b\)

      \(\large = \sum_{i=1}^{n}\alpha_{i}y_{i}(x_{i}^{1}x^{1}+...+x_{i}^{j}x^{j}+...+x_{i}^{m}x^{m}) + b\)

      \(\large = \sum_{i=1}^{n}\alpha_{i}y_{i}(x_{i}^{T}x) + b\)
  
  or equivalently
  
  \(\large f(X) = W^{T}X+b\)

      \(\large = (\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{1})x^{1} + ... + (\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j})x^{j} + ... + (\sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{m})x^{m} + b\)

      \(\large = a_{1}y_{1}\sum_{j=1}^{m} x_{1}^{j}x^{j} + ... + a_{i}y_{i}\sum_{j=1}^{m} x_{i}^{j}x^{j} + ... + a_{n}y_{n}\sum_{j=1}^{m} x_{n}^{j}x^{j} + b\)

      \(\large = \begin{bmatrix}a_{1}y_{1} & ... & a_{i}y_{i} & ... & a_{n}y_{n}\end{bmatrix} \times(\begin{bmatrix}x_{1}^{1}&...& x_{1}^{j}&...&x_{1}^{m}\\x_{i}^{1}&...& x_{i}^{j}&...& x_{i}^{m}\\x_{n}^{1}&...& x_{n}^{j}&...& x_{n}^{m}\end{bmatrix} \times \begin{bmatrix}x^{1}\\x^{j}\\x^{m}\end{bmatrix})+b\)

      \(\large =(a \cdot y)^{T} \times(X \times x^{T}) + b\)

  where
    \(\small a\) and \(\small y\) are (n, 1) matrices
    \(\small X\) is the (n, m) matrix of training samples
    \(\small x\) is a (1, m) matrix, the single sample being evaluated
  
So far the data have been assumed to be linearly separable. When the data are not perfectly linearly separable, a constant \(\small C\) is introduced to bound \(\small \alpha\), which allows some samples to be misclassified; the constraints on \(\small \alpha\) become

  s.t.  \(\large \sum_{i=1}^{n}\alpha_{i}y_{i}=0\)

  s.t.  \(\large C \geqslant \alpha_{i}\geqslant0\), \(\small i = 1,2,...,n\)
  
A general-purpose quadratic programming solver could be used to find the \(\small \alpha\) values, but that is very expensive. Many efficient algorithms have been proposed to simplify the computation, and SMO is a well-known representative.

SMO (Sequential Minimal Optimization)

SMO breaks the large optimization problem into many small ones. These small problems are usually easy to solve, and solving them sequentially gives the same result as solving the whole problem at once, while the SMO algorithm takes much less time.
  
The basic idea is to pick two values \(\small \alpha_{i}\) and \(\small \alpha_{j}\) as variables and hold all the other \(\small \alpha\) values fixed. With only two variables the sub-problem is easy to optimize. Then pick the next pair of \(\small \alpha\) values and optimize them the same way, iterating until no further improvement is possible or the maximum number of iterations is reached.
  
From \(\large \sum_{i=1}^{n}\alpha_{i}y_{i}=0\), when all \(\small \alpha\) values other than the chosen pair are held fixed, we have

  \(\large \alpha_{i}y_{i} + \alpha_{j}y_{j} = e\)
where
  \(\large e = -\sum_{k \neq i,j}\alpha_{k}y_{k}\)
  is a constant
  
(small alpha_{1})(small alpha_{2}) 为例子,因为其他 (small alpha) 值被当做常数,可以将优化目标中除 (small alpha_{1})(small alpha_{2}) 外的项去掉(因为常数无法优化)
  
So the original objective

  \(\large L=\sum_{i=1}^{n}\alpha_{i} - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j} y_{i}y_{j}x_{i}^{T}x_{j}\)

is equivalent to

  \(\large L= \alpha_{1}+\alpha_{2} - \frac{1}{2}\alpha_{1}^{2}y_{1}^{2}x_{1}^{T}x_{1} - \frac{1}{2}\alpha_{2}^{2}y_{2}^{2}x_{2}^{T}x_{2} - \alpha_{1}\alpha_{2}y_{1}y_{2}x_{1}^{T}x_{2}\)

     \(\large - \alpha_{1}y_{1}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{1}^{T}x_{i} - \alpha_{2}y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{2}^{T}x_{i}\)

   \(\large = \alpha_{1}+\alpha_{2} - \frac{1}{2}\alpha_{1}^{2}x_{1}^{T}x_{1} - \frac{1}{2}\alpha_{2}^{2}x_{2}^{T}x_{2}- \alpha_{1}\alpha_{2}y_{1}y_{2}x_{1}^{T}x_{2}\)

     \(\large - \alpha_{1}y_{1}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{1}^{T}x_{i} - \alpha_{2}y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{2}^{T}x_{i}\)
  
Substituting \(\large \alpha_{1}=\frac{e - \alpha_{2}y_{2}}{y_{1}}\) gives

  \(\large L = \frac{e - \alpha_{2}y_{2}}{y_{1}}+\alpha_{2}- \frac{1}{2}(e - \alpha_{2}y_{2})^{2}x_{1}^{T}x_{1} - \frac{1}{2}\alpha_{2}^{2}x_{2}^{T}x_{2}-(e - \alpha_{2}y_{2})\alpha_{2}y_{2}x_{1}^{T}x_{2}\)

     \(\large - (e - \alpha_{2}y_{2})\sum_{i=3}^{n}\alpha_{i}y_{i}x_{1}^{T}x_{i} - \alpha_{2}y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{2}^{T}x_{i}\)
  
Note that \(\small y^{2} = 1\) and hence \(\small y = \frac{1}{y}\). Taking the derivative with respect to \(\small \alpha_{2}\) gives

  \(\large \frac{\partial L}{\partial \alpha_{2}} = -y_{1}y_{2}+ y_{2}^{2} + (e - \alpha_{2}y_{2})x_{1}^{T}x_{1}y_{2} - \alpha_{2}x_{2}^{T}x_{2}-ey_{2}x_{1}^{T}x_{2}+ 2\alpha_{2}y_{2}^{2}x_{1}^{T}x_{2}\)

      \(\large + y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{1}^{T}x_{i} - y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{2}^{T}x_{i}\)

     \(\large = -y_{1}y_{2}+ y_{2}^{2} + ex_{1}^{T}x_{1}y_{2} - \alpha_{2}x_{1}^{T}x_{1} - \alpha_{2}x_{2}^{T}x_{2}-ey_{2}x_{1}^{T}x_{2}+ 2\alpha_{2}x_{1}^{T}x_{2}\)

      \(\large + y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{1}^{T}x_{i} - y_{2}\sum_{i=3}^{n}\alpha_{i}y_{i}x_{2}^{T}x_{i}\)

Setting the derivative to 0 gives

  \(\large \alpha_{2}(x_{1}^{T}x_{1} + x_{2}^{T}x_{2} - 2x_{1}^{T}x_{2})\)

  \(\large = y_{2}(y_{2} - y_{1} + ex_{1}^{T}x_{1} - ex_{1}^{T}x_{2} + \sum_{i=3}^{n}\alpha_{i}y_{i}x_{1}^{T}x_{i} - \sum_{i=3}^{n}\alpha_{i}y_{i}x_{2}^{T}x_{i})\)

  \(\large = y_{2}(y_{2} - y_{1} + ex_{1}^{T}x_{1} - ex_{1}^{T}x_{2}\)

     \(\large + (f(x_{1}) - \sum_{i=1}^{2}\alpha_{i}y_{i}x_{1}^{T}x_{i} - b) - (f(x_{2}) - \sum_{i=1}^{2}\alpha_{i}y_{i}x_{2}^{T}x_{i} - b))\)

  \(\large = y_{2}((f(x_{1}) - y_{1}) - (f(x_{2}) - y_{2})+ ex_{1}^{T}x_{1} - ex_{1}^{T}x_{2}- \sum_{i=1}^{2}\alpha_{i}y_{i}x_{1}^{T}x_{i} + \sum_{i=1}^{2}\alpha_{i}y_{i}x_{2}^{T}x_{i})\)

  \(\large = y_{2}((f(x_{1}) - y_{1}) - (f(x_{2}) - y_{2})+ (\alpha_{1}y_{1} + \alpha_{2}y_{2})x_{1}^{T}x_{1} - (\alpha_{1}y_{1} + \alpha_{2}y_{2})x_{1}^{T}x_{2}\)

     \(\large - \alpha_{1}y_{1}x_{1}^{T}x_{1} - \alpha_{2}y_{2}x_{1}^{T}x_{2} + \alpha_{1}y_{1}x_{2}^{T}x_{1} + \alpha_{2}y_{2}x_{2}^{T}x_{2})\)

  \(\large = y_{2}((f(x_{1}) - y_{1}) - (f(x_{2}) - y_{2})+ \alpha_{2}y_{2}(x_{1}^{T}x_{1} + x_{2}^{T}x_{2} - 2x_{1}^{T}x_{2}))\)

  \(\large = y_{2}((f(x_{1}) - y_{1}) - (f(x_{2}) - y_{2}))+ \alpha_{2}(x_{1}^{T}x_{1} + x_{2}^{T}x_{2} - 2x_{1}^{T}x_{2})\)

Throughout this derivation, the \(\small \alpha\) values used when substituting \(\large \alpha_{1}y_{1} + \alpha_{2}y_{2} = e\) and \(\large f(x)\) (i.e. the ones on the right-hand side) are the old values.

This gives the updated value

  \(\large \alpha_{2-new} = \alpha_{2-old} + \frac{y_{2}((f(x_{1}) - y_{1}) - (f(x_{2}) - y_{2}))}{x_{1}^{T}x_{1} + x_{2}^{T}x_{2} -2x_{1}^{T}x_{2}}\)

        \(\large = \alpha_{2-old} + \frac{y_{2}(E_{1} - E_{2})}{x_{1}^{T}x_{1} + x_{2}^{T}x_{2} -2x_{1}^{T}x_{2}}\)

  where \(\small E_{i} = f(x_{i}) - y_{i}\) is the prediction error on sample \(\small i\)
  
Then the updated \(\small \alpha_{1}\) is

  \(\large \alpha_{1-new} = \alpha_{1-old} + y_{1}y_{2}(\alpha_{2-old}-\alpha_{2-new})\)
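
A minimal code sketch of these two update rules (the function and variable names are mine; `k11`, `k22`, `k12` stand for \(x_{1}^{T}x_{1}\), \(x_{2}^{T}x_{2}\), \(x_{1}^{T}x_{2}\)):

def update_alpha_pair(a1_old, a2_old, y1, y2, E1, E2, k11, k22, k12):
    """One unclipped SMO update for the pair (alpha1, alpha2)."""
    eta = k11 + k22 - 2.0 * k12                       # x1.x1 + x2.x2 - 2*x1.x2
    a2_new = a2_old + y2 * (E1 - E2) / eta            # alpha2 update
    a1_new = a1_old + y1 * y2 * (a2_old - a2_new)     # alpha1 update
    return a1_new, a2_new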


After obtaining the updated values, move on to the next pair of \(\small \alpha\) values and optimize them.
  
  
Because of the constraints \(\small C \geqslant \alpha \geqslant 0\) and \(\small \alpha_{1}y_{1} + \alpha_{2}y_{2} = e\), the \(\small \alpha\) values are restricted during optimization.
  
(small y_{1})(small y_{2}) 异号,比如 (small y_{1} = 1)(small y_{2} = -1) 时,有 (small alpha_{1} - alpha_{2} = e)
  
  当 (small e > C)(small e < -C)
  
    由于有 (small C geqslant alphageqslant0),该式子无解
  
  当 (small C geqslant e geqslant 0)
  
    (small alpha_{2}) 的取值范围在 (small (0, C-e))
  
  当 (small 0 geqslant e geqslant -C)
  
    (small alpha_{2}) 的取值范围在 (small (-e, C))
  
  可以看到 (small alpha_{2}) 的取值范围是
  
    (small L = max(0,-e),H = min(C,C-e))
  
  即有 (small alpha_{2-new})  的取值范围是
  
    (large max(0, alpha_{2-old}-alpha_{1-old}) leqslant alpha_{2-new} leqslant min(C, C+alpha_{2-old}-alpha_{1-old}))
  
Similarly, when \(\small y_{1}\) and \(\small y_{2}\) have the same sign,

  \(\large \max(0, \alpha_{2-old}+\alpha_{1-old}-C) \leqslant \alpha_{2-new} \leqslant \min(C, \alpha_{2-old}+\alpha_{1-old})\)
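
A minimal helper capturing both cases (a sketch; the function name is mine, and C and the alphas follow the notation above):

def alpha2_bounds(a1_old, a2_old, y1, y2, C):
    """Return (L, H), the feasible range for the updated alpha2."""
    if y1 != y2:
        # alpha1 - alpha2 is fixed
        L = max(0.0, a2_old - a1_old)
        H = min(C, C + a2_old - a1_old)
    else:
        # alpha1 + alpha2 is fixed
        L = max(0.0, a2_old + a1_old - C)
        H = min(C, a2_old + a1_old)
    return L, H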
  
  
Next, solve for \(\small b\)
  
  From \(\large f(x) =\sum_{i=1}^{n}\alpha_{i}y_{i}(x_{i}^{T}x) + b\) we can work backwards to \(\small b\)

  If \(\small 0 < \alpha_{1-new} < C\): compute it from the first point

    \(\large b_{1-new} = y_{1} - \sum_{i=3}^{n}(\alpha_{i-old})y_{i}(x_{i}^{T}x_{1}) - (\alpha_{1-new})y_{1}(x_{1}^{T}x_{1}) - (\alpha_{2-new})y_{2}(x_{2}^{T}x_{1})\)

    and since \(\large E_{1} = \sum_{i=3}^{n}(\alpha_{i-old})y_{i}(x_{i}^{T}x_{1}) + (\alpha_{1-old})y_{1}(x_{1}^{T}x_{1}) + (\alpha_{2-old})y_{2}(x_{2}^{T}x_{1}) + b_{old} - y_{1}\)

    \(\large b_{1-new} = y_{1} - E_{1} + (\alpha_{1-old})y_{1}(x_{1}^{T}x_{1}) + (\alpha_{2-old})y_{2}(x_{2}^{T}x_{1}) + b_{old} - y_{1}\)

          \(\large - (\alpha_{1-new})y_{1}(x_{1}^{T}x_{1}) - (\alpha_{2-new})y_{2}(x_{2}^{T}x_{1})\)

         \(\large = - E_{1} - y_{1}(x_{1}^{T}x_{1})(\alpha_{1-new}-\alpha_{1-old}) - y_{2}(x_{2}^{T}x_{1})(\alpha_{2-new}-\alpha_{2-old}) + b_{old}\)

  elif \(\small 0 < \alpha_{2-new} < C\): compute it from the second point

    \(\large b_{2-new} = - E_{2} - y_{1}(x_{1}^{T}x_{2})(\alpha_{1-new}-\alpha_{1-old}) - y_{2}(x_{2}^{T}x_{2})(\alpha_{2-new}-\alpha_{2-old}) + b_{old}\)

  else:
    \(\small \alpha_{1-new}\) and \(\small \alpha_{2-new}\) are each either 0 or \(\small C\), and we take

    \(\large b_{new} = \frac{b_{1-new} + b_{2-new}}{2}\)

Classification and Prediction

The formulas were given above; to summarize:

  \(\large W_{j} = \sum_{i=1}^{n} \alpha_{i} y_{i}x_{i}^{j}\)

  \(\large f(X) = W^{T}X+b\)

    or

  \(\large f(x)= \sum_{i=1}^{n}\alpha_{i}y_{i}(x_{i}^{T}x) + b\)

    or

  \(\large f(x) = (a \cdot y)^{T} \times(X \times x^{T}) + b\)


  The samples \(\small x_{i}\) whose \(\small \alpha_{i} \neq 0\) are the support vectors
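
A minimal NumPy sketch of this prediction rule (the names are mine; alphas and b are assumed to come from training):

import numpy as np

def predict(x, X, y, alphas, b):
    """
    f(x) = sum_i alpha_i * y_i * (x_i^T x) + b; the sign gives the class.
    X: (n, m) training samples, y: (n,) labels in {+1, -1},
    alphas: (n,) multipliers from training, b: scalar offset.
    """
    return np.sign(np.sum(alphas * y * X.dot(x)) + b)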

Simplified SMO Code

# coding=utf-8
import numpy as np


def selectJrand(i, n):
    """
    Randomly pick an index j in [0, n) with j != i
    """
    j = i
    while j == i:
        j = int(np.random.uniform(0, n))
    return j


def clipAlpha(aj, H, L):
    """
    Clip aj so that L <= aj <= H
    """
    if aj > H:
        aj = H
    if L > aj:
        aj = L
    return aj


def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
    """
    dataMatIn   - training samples, an (n, m) array, where n is the number of samples and m the number of features
    classLabels - labels, a length-n array with values 1 or -1
    C           - enforces C >= a >= 0 to handle data that are not 100% linearly separable; the bound keeps
                  the a values from growing without limit while allowing some samples to be misclassified
    toler       - tolerated error
    maxIter     - maximum number of iterations
    """

    # Convert the samples to numpy matrices
    dataMatrix = np.mat(dataMatIn)
    labelMat = np.mat(classLabels).transpose()

    # Number of samples, number of features
    n, m = np.shape(dataMatrix)

    # Initialize a and b to 0
    b = 0
    alphas = np.mat(np.zeros((n, 1)))

    # Exit after maxIter consecutive passes over the data in which no a value changes
    # Whenever an a value does change, the pass counter is reset
    iter = 0
    while iter < maxIter:
        alphaPairsChanged = 0

        # Iterate over every sample
        for i in range(n):
            # f = WX + b
            #   = (a . y)^T * (X * x^T) + b
            fXi = float(np.multiply(alphas, labelMat).T * (dataMatrix * dataMatrix[i, :].T)) + b

            # Compute the error on sample i
            Ei = fXi - float(labelMat[i])

            # If the error is large enough, try to optimize this a value
            if (labelMat[i] * Ei < -toler and alphas[i] < C) or (labelMat[i] * Ei > toler and alphas[i] > 0):
                # Randomly pick another sample
                j = selectJrand(i, n)

                # Compute its error
                fXj = float(np.multiply(alphas, labelMat).T * (dataMatrix * dataMatrix[j, :].T)) + b
                Ej = fXj - float(labelMat[j])

                # Save the old a values
                alphaIold = alphas[i].copy()
                alphaJold = alphas[j].copy()

                # Compute the feasible range [L, H] for the updated a2
                if labelMat[i] != labelMat[j]:
                    L = max(0, alphas[j] - alphas[i])
                    H = min(C, C + alphas[j] - alphas[i])
                else:
                    L = max(0, alphas[j] + alphas[i] - C)
                    H = min(C, alphas[j] + alphas[i])

                if L == H:
                    # Cannot be optimized
                    print("L == H")
                    continue

                # a2-new = a2-old - y2(E1-E2)/(2 * x1 * x2^T - x1 * x1^T - x2 * x2^T)
                #        = a2-old - y2(E1-E2)/eta
                eta = 2.0 * dataMatrix[i, :] * dataMatrix[j, :].T \
                    - dataMatrix[i, :] * dataMatrix[i, :].T \
                    - dataMatrix[j, :] * dataMatrix[j, :].T

                if eta >= 0:
                    # Cannot be optimized
                    print("eta >= 0")
                    continue

                # Update aj
                alphas[j] -= labelMat[j] * (Ei - Ej) / eta

                # Clip aj into [L, H]
                alphas[j] = clipAlpha(alphas[j], H, L)

                if abs(alphas[j] - alphaJold) < 0.00001:
                    # aj barely changed, not worth keeping
                    print("j not moving enough")
                    continue

                # Update ai
                # a1-new = a1-old + y1*y2*(a2-old - a2-new)
                alphas[i] += labelMat[j] * labelMat[i] * (alphaJold - alphas[j])

                # Compute the b values
                b1 = b - Ei \
                     - labelMat[i] * (alphas[i] - alphaIold) * dataMatrix[i, :] * dataMatrix[i, :].T \
                     - labelMat[j] * (alphas[j] - alphaJold) * dataMatrix[i, :] * dataMatrix[j, :].T

                b2 = b - Ej \
                     - labelMat[i] * (alphas[i] - alphaIold) * dataMatrix[i, :] * dataMatrix[j, :].T \
                     - labelMat[j] * (alphas[j] - alphaJold) * dataMatrix[j, :] * dataMatrix[j, :].T

                if (0 < alphas[i]) and (C > alphas[i]):
                    b = b1
                elif (0 < alphas[j]) and (C > alphas[j]):
                    b = b2
                else:
                    b = (b1 + b2) / 2.0

                # Count this alpha-pair update
                alphaPairsChanged += 1

                print "iter: %d i:%d, pairs changed %d" % (iter, i, alphaPairsChanged)

        if alphaPairsChanged == 0:
            # Reaching here means a full pass over the dataset produced no update
            # But because the second sample is picked at random, later passes may still find updates, so keep iterating
            # This part could be improved: if the first sample alone already rules out an update, there is no point iterating further here
            # Stopping should also not rely on the iteration count alone; another condition, e.g. the error falling below a threshold, should end the computation as well
            iter += 1
        else:
            # Something was updated, so reset the pass counter
            iter = 0

        print("iteration number: %d" % iter)

    return b, alphas


def calcWs(alphas, dataArr, classLabels):
    """
    Compute the weight vector W from the alphas, the samples, and the labels
    """
    X = np.mat(dataArr)
    labelMat = np.mat(classLabels).transpose()
    m, n = np.shape(X)
    w = np.zeros((n, 1))

    for i in range(m):
        w += np.multiply(alphas[i]*labelMat[i], X[i, :].T)

    return w
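
A hedged usage sketch of the two functions above (the toy data, C, and the tolerance are made up for illustration):

if __name__ == '__main__':
    # Tiny made-up dataset: two clusters, labels -1 / +1
    dataArr = [[1.0, 2.1], [2.0, 1.1], [7.0, 8.0], [8.0, 9.0]]
    labelArr = [-1.0, -1.0, 1.0, 1.0]

    b, alphas = smoSimple(dataArr, labelArr, C=0.6, toler=0.001, maxIter=40)
    w = calcWs(alphas, dataArr, labelArr)

    # Score the first training sample with w^T x + b
    x = np.mat(dataArr)[0]
    print("f(x) = %.3f, label = %d" % (float(x * w + b), labelArr[0]))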

Full SMO

  • The simplified SMO runs slowly
  • The full version takes the same parameters
  • The alpha update computation is the same
  • The selection of the second alpha differs: the simplified version picks it at random, while the full version uses a more elaborate heuristic (see the sketch after this list)
  • In the outer loop each pass is counted only once (iter is never reset), and the loop exits if consecutive passes fail to update any alphas
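
The original post does not spell that heuristic out; a common choice (used, for example, in Platt's SMO) is to pick the second alpha that maximizes \(\small |E_{1} - E_{2}|\), so each step moves as far as possible. A hedged sketch, assuming the caller maintains a cache of errors per sample:

def selectJ(i, Ei, errorCache):
    """
    Heuristic second-alpha selection: among samples with a cached error,
    pick the j that maximizes |Ei - Ej|.

    errorCache: dict mapping sample index -> cached error E_k
    """
    best_j, best_Ej, max_delta = -1, 0.0, -1.0
    for k, Ek in errorCache.items():
        if k == i:
            continue
        delta = abs(Ei - Ek)
        if delta > max_delta:
            max_delta, best_j, best_Ej = delta, k, Ek
    return best_j, best_Ej

When the cache is still empty, the full version falls back to a random choice such as selectJrand above.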

Kernel Functions

  • Some data are not linearly separable; for example, some two-dimensional datasets cannot be split by a straight line but can be split by a circle
  • Such data can be mapped from the original space into a higher-dimensional feature space (e.g. mapping two-dimensional data into three dimensions) so that they become linearly separable there (see the sketch after this list)
  • If the original feature space is finite-dimensional, a higher-dimensional space in which the data are linearly separable always exists
  • The kernel function makes this practical by computing the inner product \(\small \Phi(x_{i})^{T}\Phi(x_{j})\) of the mapped vectors directly from the original data; many other machine learning algorithms also use kernel functions
  • With the mapped feature vector written \(\small \Phi(x)\), the separating hyperplane becomes \(\small f(x) = w^{T}\Phi(x) + b\)
  • The parameters are derived exactly as in the SMO above
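
A minimal sketch of this idea (the mapping \(\small \Phi(x_{1}, x_{2}) = (x_{1}, x_{2}, x_{1}^{2} + x_{2}^{2})\) and the data are my own illustration): points inside versus outside a circle cannot be split by a line in 2D, but after adding the squared radius as a third feature they can be split by a plane.

import numpy as np

# Made-up data: class -1 inside the unit circle, class +1 outside it
X2d = np.array([[0.1, 0.2], [-0.3, 0.1],    # inside  -> -1
                [2.0, 0.5], [-1.5, 1.5]])   # outside -> +1
y = np.array([-1, -1, 1, 1])

# Map to 3D: phi(x1, x2) = (x1, x2, x1^2 + x2^2)
X3d = np.column_stack([X2d, (X2d ** 2).sum(axis=1)])

# In 3D the plane z = 1, i.e. w = (0, 0, 1), b = -1, separates the two classes
w, b = np.array([0.0, 0.0, 1.0]), -1.0
print(np.sign(X3d.dot(w) + b))              # [-1. -1.  1.  1.] -> matches y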

The Gaussian Kernel

The radial basis function (RBF) is one of the kernels most commonly used with SVMs.
A radial basis function takes vectors as its arguments and outputs a scalar based on a vector distance.
Its Gaussian version is
  
  \(\large k(x_{i}, x_{j}) = \exp(\frac{-||x_{i}-x_{j}||^{2}}{2\sigma^{2}})\)
  
where
  \(\large \exp()\) denotes the exponential function, i.e. powers of the natural constant \(\small e\)
  \(\large \sigma\) is a parameter that controls how quickly the function value falls off to 0
  
This function maps the data into a higher-dimensional space;
in fact it maps them into an infinite-dimensional space (the Taylor expansion of the exponential produces infinitely many feature dimensions)
  
The objective function to optimize changes from

  \(\large L = \sum_{i=1}^{n}\alpha_{i} - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j} y_{i}y_{j}x_{i}^{T}x_{j}\)

to

  \(\large L = \sum_{i=1}^{n}\alpha_{i} -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j} y_{i}y_{j}k(x_{i}, x_{j})\)
  
The training process is the same as the SMO above; just replace \(\small x_{i}^{T}x_{j}\) with \(\small k(x_{i}, x_{j})\) throughout the derivation
  
Computing the Gaussian kernel

def kernel(X, omiga):
    """
    Transform the sample data X with the Gaussian kernel

    X is a 2-D matrix

    Returns the matrix K, where

    K(i, j) = exp(-(||Xi - Xj||**2)/(2 * (omiga**2)))
    """

    m, n = np.shape(X)

    K = np.mat(np.zeros((m, m)))

    for i in range(m):
        for j in range(m):
            # Compute ||Xi - Xj||**2
            deltaRow = X[j, :] - X[i, :]
            d = deltaRow * deltaRow.T

            K[i, j] = np.exp(d / (-2 * omiga ** 2))

    return K
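
With the kernel matrix in place, prediction replaces the inner products \(\small x_{i}^{T}x\) with \(\small k(x_{i}, x)\). A hedged sketch (the function and argument names are mine; svX, svY, svAlphas are assumed to be the support vectors, their labels, and their alphas from kernel-based training):

def kernelPredict(x, svX, svY, svAlphas, b, omiga):
    """
    f(x) = sum_i alpha_i * y_i * k(x_i, x) + b, using the Gaussian kernel
    and summing over the support vectors only (the samples with alpha_i > 0).

    x is a (1, n) matrix; svX is an (m, n) matrix of support vectors.
    """
    m = np.shape(svX)[0]
    k = np.mat(np.zeros((m, 1)))
    for i in range(m):
        deltaRow = svX[i, :] - x
        k[i] = np.exp(deltaRow * deltaRow.T / (-2 * omiga ** 2))
    f = float(np.multiply(svAlphas, svY).T * k) + b
    return np.sign(f)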
 
 


Original post: https://www.cnblogs.com/moonlight-lin/p/12375525.html