The Meaning of Kernels

Kernels came up at the end of Chapter 7: a kernel is simply an inner product, i.e. a sum of products, of the vectors that represent the inputs.

Back in our discussion of linear regression, we had a problem in which the input x was the living area of a house, and we considered performing regression using the features x, x², and x³ (say) to obtain a cubic function. To distinguish between these two sets of variables, we’ll call the “original” input value the input attributes of a problem (in this case, x, the living area). When that is mapped to some new set of quantities that are then passed to the learning algorithm, we’ll call those new quantities the input features. (Unfortunately, different authors use different terms to describe these two things, but we’ll try to use this terminology consistently in these notes.) We will also let φ denote the feature mapping, which maps from the attributes to the features. For instance, in our example, we had

φ(x) = [x, x², x³]ᵀ.
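
As a small illustration (a sketch in NumPy, not part of the original notes; phi is just an illustrative name), this mapping could be written as:

```python
import numpy as np

def phi(x):
    """Cubic feature mapping: living area x -> the features (x, x^2, x^3)."""
    return np.array([x, x**2, x**3])

# A house with living area 2.0 becomes the feature vector [2.0, 4.0, 8.0].
print(phi(2.0))
```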

Rather than applying SVMs using the original input attributes x, we may instead want to learn using some features φ(x). To do so, we simply need to go over our previous algorithm, and replace x everywhere in it with φ(x).


Since the algorithm can be written entirely in terms of the inner products <x, z>, this means that we would replace all those inner products with <φ(x), φ(z)>. Specifically, given a feature mapping φ, we define the corresponding Kernel to be


K(x, z) = φ(x)ᵀφ(z).

Then, everywhere we previously had <x, z> in our algorithm, we could simply replace it with K(x, z), and our algorithm would now be learning using the features φ.

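To make the substitution concrete, here is a minimal sketch in Python/NumPy (not from the original notes): it assumes a dual-form decision function of the form Σᵢ αᵢyᵢ<xᵢ, x> + b with already-trained quantities alpha, y, X, and b, all of which are illustrative names. Swapping the kernel argument is the only change needed.

```python
import numpy as np

def linear_kernel(x, z):
    # The original inner product <x, z>.
    return float(np.dot(x, z))

def decision_value(x, X, y, alpha, b, K=linear_kernel):
    """Sum_i alpha_i * y_i * K(x_i, x) + b.  Passing a different K replaces every
    <x_i, x> with K(x_i, x), so the classifier effectively works in the feature space of phi."""
    return sum(a * yi * K(xi, x) for a, yi, xi in zip(alpha, y, X)) + b
```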

Now, given φ, we could easily compute K(x, z) by finding φ(x) and φ(z) and taking their inner product. But what’s more interesting is that often, K(x, z) may be very inexpensive to calculate, even though φ(x) itself may be very expensive to calculate (perhaps because it is an extremely high dimensional vector). In such settings, by using in our algorithm an efficient way to calculate K(x, z), we can get SVMs to learn in the high dimensional feature space given by φ, but without ever having to explicitly find or represent vectors φ(x).


Let’s see an example. Suppose x, z ∈ ℝⁿ, and consider


K(x, z) = (xᵀz)².

We can also write this as

K(x, z) = (Σᵢ xᵢzᵢ)(Σⱼ xⱼzⱼ) = Σᵢ Σⱼ xᵢxⱼzᵢzⱼ = Σᵢ,ⱼ (xᵢxⱼ)(zᵢzⱼ), where all sums run from 1 to n.

Thus, we see that K(x, z) = φ(x)ᵀφ(z), where the feature mapping φ is given (shown here for the case of n = 3) by

φ(x) = [x₁x₁, x₁x₂, x₁x₃, x₂x₁, x₂x₂, x₂x₃, x₃x₁, x₃x₂, x₃x₃]ᵀ.

Note that whereas calculating the high-dimensional φ(x) requires O(n²) time, finding K(x, z) takes only O(n) time, linear in the dimension of the input attributes.

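As a quick numerical check (a sketch in NumPy; phi and kernel are names chosen for this example, not from the notes): the explicit feature vector has n² entries, while the kernel value needs only a single n-dimensional dot product.

```python
import numpy as np

def phi(x):
    """Explicit feature map for K(x, z) = (x^T z)^2: all products x_i * x_j (an n^2-dimensional vector)."""
    return np.outer(x, x).ravel()

def kernel(x, z):
    """The same quantity computed directly in O(n) time, without ever forming phi."""
    return float(np.dot(x, z)) ** 2

x = np.array([1.0, 2.0, 3.0])
z = np.array([4.0, 5.0, 6.0])

print(np.dot(phi(x), phi(z)))  # 1024.0
print(kernel(x, z))            # 1024.0
```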

For a related kernel, also consider

K(x, z) = (xᵀz + c)² = Σᵢ,ⱼ (xᵢxⱼ)(zᵢzⱼ) + Σᵢ (√(2c)·xᵢ)(√(2c)·zᵢ) + c².

(Check this yourself.) This corresponds to the feature mapping (again shown for n = 3)

φ(x) = [x₁x₁, x₁x₂, x₁x₃, x₂x₁, x₂x₂, x₂x₃, x₃x₁, x₃x₂, x₃x₃, √(2c)·x₁, √(2c)·x₂, √(2c)·x₃, c]ᵀ,

and the parameter c controls the relative weighting between the xᵢ (first-order) and the xᵢxⱼ (second-order) terms.

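The same check can be done for this kernel (again a hedged sketch with illustrative names; c = 0.5 is an arbitrary choice):

```python
import numpy as np

def phi_c(x, c):
    """Feature map for K(x, z) = (x^T z + c)^2: all products x_i * x_j,
    the first-order terms sqrt(2c) * x_i, and the constant feature c."""
    return np.concatenate([np.outer(x, x).ravel(), np.sqrt(2.0 * c) * x, [c]])

def kernel_c(x, z, c):
    """The same kernel computed directly in O(n) time."""
    return (float(np.dot(x, z)) + c) ** 2

x = np.array([1.0, 2.0, 3.0])
z = np.array([4.0, 5.0, 6.0])

print(np.dot(phi_c(x, 0.5), phi_c(z, 0.5)))  # 1056.25
print(kernel_c(x, z, 0.5))                   # 1056.25
```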

 

Original post: https://www.cnblogs.com/2008nmj/p/8456580.html