【Bookmarks】stacking, blending

Understanding stacking: Stacking for model ensembling explained / the difference between Stacking and Blending

https://blog.csdn.net/u014114990/article/details/50819948

https://mlwave.com/kaggle-ensembling-guide/

The basic idea behind stacked generalization is to use a pool of base classifiers, then use another classifier to combine their predictions, with the aim of reducing the generalization error.

Let’s say you want to do 2-fold stacking:

  • Split the train set in 2 parts: train_a and train_b
  • Fit a first-stage model on train_a and create predictions for train_b
  • Fit the same model on train_b and create predictions for train_a
  • Finally fit the model on the entire train set and create predictions for the test set.
  • Now train a second-stage stacker model on the probabilities from the first-stage model(s).

A stacker model gets more information on the problem space by using the first-stage predictions as features than if it was trained in isolation.
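Below is a minimal sketch of the 2-fold stacking recipe above, using scikit-learn. The synthetic dataset, the choice of a random forest as the (single) first-stage model, and logistic regression as the stacker are all illustrative assumptions; a real ensemble would stack the out-of-fold probabilities of several diverse first-stage models side by side as the stacker's features.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Split the train set in 2 parts: train_a and train_b
    X_a, X_b, y_a, y_b = train_test_split(X_train, y_train, test_size=0.5, random_state=0)

    first_stage = RandomForestClassifier(n_estimators=100, random_state=0)

    # Fit on train_a, predict train_b; fit on train_b, predict train_a
    # (out-of-fold predictions covering the whole train set)
    oof_b = first_stage.fit(X_a, y_a).predict_proba(X_b)[:, 1]
    oof_a = first_stage.fit(X_b, y_b).predict_proba(X_a)[:, 1]
    # Fit on the entire train set and predict the test set
    test_meta = first_stage.fit(X_train, y_train).predict_proba(X_test)[:, 1]

    # The first-stage probabilities become the features of the stacker
    meta_X = np.concatenate([oof_a, oof_b]).reshape(-1, 1)
    meta_y = np.concatenate([y_a, y_b])

    # Train the second-stage stacker model on those probabilities
    stacker = LogisticRegression()
    stacker.fit(meta_X, meta_y)
    print("stacked test accuracy:", stacker.score(test_meta.reshape(-1, 1), y_test))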

It is usually desirable that the level 0 generalizers are of all “types”, and not just simple variations of one another (e.g., we want surface-fitters, Turing-machine builders, statistical extrapolators, etc., etc.). In this way all possible ways of examining the learning set and trying to extrapolate from it are being exploited. This is part of what is meant by saying that the level 0 generalizers should “span the space”.

[…] stacked generalization is a means of non-linearly combining generalizers to make a new generalizer, to try to optimally integrate what each of the original generalizers has to say about the learning set. The more each generalizer has to say (which isn’t duplicated in what the other generalizers have to say), the better the resultant stacked generalization. (Wolpert, 1992, Stacked Generalization)

From the passages above we can see the essence of the stacking idea: examine the training data's distribution from several different angles so as to learn its regularities more comprehensively, then let the next level learn from the multiple views produced by the first-level learners. In effect this pools the information each learner has picked up, and so learns better.

Take You into the Kaggle Top 1% in Minutes (分分钟带你杀入Kaggle Top 1%)

A Killer Tool for Data Competitions: Model Ensembling (stacking & blending) (数据比赛大杀器----模型融合(stacking&blending))

A Summary of Ensemble Learning & the Stacking Method Explained (集成学习总结 & Stacking方法详解)

 

Stacking involves a step much like cross-validation. Some notes on cross-validation:

1. Understanding cross-validation (CV): https://blog.csdn.net/liuweiyuxiang/article/details/78489867
Cross-validation essentially reuses one batch of data several times: each time the split into training and test sets is different, the algorithm is run on every split, and the results are averaged to measure the algorithm's accuracy.
In essence, cross-validation is repeated experimentation, with the average taken as the final result.

Uses of cross-validation:
1.1 Tuning an algorithm's parameters: with a single algorithm, e.g. KNN, to find the best value of the parameter K, run cross-validation at K=3 to get the mean accuracy for K=3,
then at K=4 to get the mean accuracy for K=4, and so on.
Finally, compare the mean accuracies across the values of K to decide which K gives the best accuracy.
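A minimal sketch of this tuning loop, assuming scikit-learn and its built-in iris dataset (both purely illustrative choices):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    best_k, best_acc = None, 0.0
    for k in range(3, 10):
        # Mean accuracy over 5 folds for this value of K
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
        print(f"K={k}: mean accuracy {acc:.3f}")
        if acc > best_acc:
            best_k, best_acc = k, acc
    print("best K:", best_k)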

1.2 Cross-validation for comparing algorithms (model selection): with each algorithm's parameters already fixed (e.g. KNN's k set to 5 and the logistic regression parameters also chosen), run both KNN and logistic regression through cross-validation on the same dataset; comparing the resulting accuracies tells you whether KNN or LR is better.
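A sketch of such a comparison, again assuming scikit-learn and the iris dataset:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Same data, same 5-fold cross-validation for both fixed models
    knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
    lr_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"KNN: {knn_acc:.3f}  LR: {lr_acc:.3f}")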

A validation set is similar to a training set, but it is not mandatory; it only comes into play when doing cross-validation. Ordinarily you just split the full dataset into a training set (say 8000 rows) and a testing set (say 2000 rows): the training set is used to train the algorithm so that it learns the overall distribution of the data, and the trained algorithm then predicts on the testing set to compute an accuracy.

A single training/testing split, however, tends to produce a high-variance, unstable estimate of the algorithm's performance. So keep the testing set untouched and subdivide the original training set further, say into 5 subsets [subset 1, subset 2, ..., subset 5] of 1600 rows each. Take 4 of them as the training set and the remaining one (subset 1) as the validation set: train on those 4 subsets, predict on the validation set, and record the accuracy under this split. Then make subset 2 the validation set, with subsets 1, 3, 4, 5 together as the training set; train, predict on subset 2, and compute the accuracy. Do this 5 times in all and average the accuracies as the algorithm's accuracy; this process is cross-validation. Afterwards you can still predict on the testing set (2000 rows) as a final measure of accuracy. From this it is clear that the validation set plays exactly the role the original testing set used to play; see the sketch below.
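A sketch of this procedure with scikit-learn's KFold; the 10000-row synthetic dataset and the logistic regression model are illustrative assumptions:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, train_test_split

    X, y = make_classification(n_samples=10000, random_state=0)
    # 8000 rows for training, 2000 rows held out as the testing set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2000, random_state=0)

    scores = []
    # Each round trains on 4 subsets (6400 rows) and validates on 1 (1600 rows)
    for train_idx, val_idx in KFold(n_splits=5).split(X_train):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train[train_idx], y_train[train_idx])
        scores.append(model.score(X_train[val_idx], y_train[val_idx]))

    print("cross-validated accuracy:", np.mean(scores))
    # The untouched testing set still gives a final check
    final = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", final.score(X_test, y_test))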

Understanding the Training Set, Validation Set, Test Set, and Cross-Validation (训练集、验证集、测试集以及交叉验证的理解)

2. Grid search: exhaustive search over parameter settings when tuning. For example, if parameter a has two candidate values and parameter b has three, there are 6 combinations in all; try each one in turn and keep the best combination (a sketch follows the links below).
https://www.cnblogs.com/ysugyl/p/8711205.html
https://blog.csdn.net/sinat_32547403/article/details/73008127
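A sketch using scikit-learn's GridSearchCV; the SVC model and its 2 × 3 parameter grid stand in for the hypothetical parameters a and b:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # 2 values of C times 3 kernels = 6 combinations, each cross-validated
    param_grid = {"C": [1, 10], "kernel": ["linear", "rbf", "poly"]}
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    print("best CV accuracy:", search.best_score_)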

Original post: https://www.cnblogs.com/aaronhoo/p/9878678.html