stochastic noise and deterministic noise

In machine learning, one of the causes of overfitting is noise, and this noise comes in two kinds: stochastic noise, which arises from the data-generation process (e.g., measurement error), and deterministic noise, which arises from the complexity of the target function f relative to the chosen hypothesis set H, i.e., the part of f that even the best hypothesis in H cannot capture. The two kinds of noise have different origins, but their effect on learning is similar: the larger the noise, the more likely overfitting becomes.


This is a very subtle question!

The most important thing to realize is that in learning, H is fixed and D is given, and so can be assumed fixed. Now we can ask what is going on in this learning scenario. Here is what we can say:

i) If there is stochastic noise with ‘magnitude’ $\sigma^2$, then you are in trouble.

ii) If there is deterministic noise, then you are in trouble.

The stochastic noise can be viewed as one part of the data generation process (e.g., measurement errors). The deterministic noise can similarly be viewed as another part of the data generation process, namely f. The deterministic and stochastic noise are fixed. In your analogy, you can increase the stochastic noise by increasing the noise variance and you get into deeper trouble. Similarly, you can increase the deterministic noise by making f more complex and you will get into deeper trouble.
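Here is a minimal simulation sketch of those two knobs (my own illustration, not from the book or the quoted answer): keep H fixed — here, least-squares degree-2 polynomial fits — and watch the average out-of-sample error as either the stochastic noise level or the complexity of the target f grows. The target family, sample size, and degrees are arbitrary assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_eout(sigma, target_degree, n_train=20, n_trials=500, h_degree=2):
    """Average out-of-sample squared error of a fixed H (degree-2 polynomials)
    as the stochastic noise (sigma) or the target complexity (target_degree) grows."""
    x_test = np.linspace(-1, 1, 200)
    errs = []
    for _ in range(n_trials):
        # target f: a random polynomial of the chosen degree (deterministic-noise knob)
        f = np.polynomial.Polynomial(rng.standard_normal(target_degree + 1))
        # training data: y = f(x) + stochastic noise (stochastic-noise knob)
        x = rng.uniform(-1, 1, n_train)
        y = f(x) + sigma * rng.standard_normal(n_train)
        # learn g in H by a least-squares polynomial fit of degree h_degree
        g = np.polynomial.Polynomial.fit(x, y, deg=h_degree)
        errs.append(np.mean((g(x_test) - f(x_test)) ** 2))
    return np.mean(errs)

# knob 1: same target family, larger noise variance -> larger out-of-sample error
for sigma in (0.0, 0.5, 1.0):
    print("sigma =", sigma, " E_out ~", round(avg_eout(sigma, target_degree=3), 3))

# knob 2: same noise, more complex target -> larger out-of-sample error
for q in (2, 5, 10):
    print("target degree =", q, " E_out ~", round(avg_eout(0.5, target_degree=q), 3))
```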

I just need to tell you what ‘trouble’ means. Well, we actually use another word instead of ‘trouble’ - overfitting.

This means you may be likely to make an inferior choice over the superior choice because the inferior choice has lower in-sample error. Doing stuff that looks good in-sample but leads to disasters out-of-sample is the essence of overfitting. An example of this is trying to choose the regularization parameter. If you pick a lower regularization parameter, then you have lower in-sample error, but it leads to higher out-of-sample error - you picked the λ with lower $E_{in}$ but it gave higher $E_{out}$. We call that overfitting. Underfitting is just the name we give to the opposite process in the context of picking the regularization parameter. Once the regularization parameter gets too high, as you pick a higher λ you get both higher $E_{in}$ and higher $E_{out}$. It also turns out that this means you over-regularized and obtained an over-simplistic g - i.e. you ‘underfitted’, you didn’t fit the data enough. Underfitting and overfitting are just terms. The substance of what is going on under the hood is how the deterministic and stochastic noise affect what you should and should not do in-sample.
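To make the λ story concrete, here is a small ridge-regression (weight-decay) sketch — again my own illustration, with arbitrary choices of target, noise level, and feature transform: as λ shrinks, $E_{in}$ keeps falling, but past some point $E_{out}$ starts rising again (overfitting); push λ too high and both errors grow (underfitting).

```python
import numpy as np

rng = np.random.default_rng(1)

def poly_features(x, degree):
    """Polynomial feature matrix [1, x, x^2, ..., x^degree]."""
    return np.vander(x, degree + 1, increasing=True)

# a noisy target: f(x) = sin(pi*x) plus stochastic noise
f = lambda x: np.sin(np.pi * x)
x_train = rng.uniform(-1, 1, 15)
y_train = f(x_train) + 0.3 * rng.standard_normal(15)
x_test = np.linspace(-1, 1, 500)
y_test = f(x_test)                       # compare against the noiseless target

Z_train = poly_features(x_train, 10)     # a deliberately flexible H
Z_test = poly_features(x_test, 10)

for lam in (1e-8, 1e-4, 1e-2, 1.0, 100.0):
    # ridge / weight-decay solution: w = (Z^T Z + lam*I)^(-1) Z^T y
    w = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(Z_train.shape[1]),
                        Z_train.T @ y_train)
    e_in = np.mean((Z_train @ w - y_train) ** 2)
    e_out = np.mean((Z_test @ w - y_test) ** 2)
    print(f"lambda = {lam:8.0e}:  E_in = {e_in:.4f}   E_out = {e_out:.4f}")
```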
Now let’s get back to the subtle part of your question. There is actually another way to decrease the deterministic noise - increase the complexity of H (the other way is to decrease the complexity of f which we discussed above). Now is where the difference with stochastic noise pops up. With stochastic noise, it either goes up or down; if down, then things get better. With deterministic noise, if you just tell me that it went down, I need to ask you how. Did your target function get simpler - if yes, then great, it is just as if the stochastic noise went down. If it is that your H got more complicated, then things get interesting. To understand what is going on, the Bias Variance decomposition helps (bottom of page 125 in the textbook).

$$E_{out} = \sigma^2 + \text{bias} + \text{var}$$

$\sigma^2$ is the direct impact of the stochastic noise. The bias is the direct impact of the deterministic noise. The var term is interesting and is the indirect impact of the noise, through H. The var term is mostly controlled by the size of H in relation to the number of data points. So getting back to the point, if you make H more complex, you will decrease the det. noise (bias) but you will increase the var (its indirect impact). Usually the latter dominates (overfitting, not because of the direct impact of the noise, but because of its indirect impact) … unless you are in the underfitting regime when the former dominates.
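Here is a Monte Carlo sketch of that decomposition (my own code, following the idea on page 125 rather than anything in the book): average many hypotheses learned on independent data sets to get $\bar{g}$, estimate bias and var from it, and check that the directly measured expected out-of-sample error lands near $\sigma^2 + \text{bias} + \text{var}$. Making H more complex lowers the bias but raises the var. The target, noise level, and polynomial degrees are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

f = lambda x: np.sin(np.pi * x)          # target function
sigma = 0.3                              # stochastic noise level
n_train, n_trials = 20, 2000
x_grid = np.linspace(-1, 1, 200)

for degree in (1, 3, 9):                 # complexity of H
    preds = np.empty((n_trials, x_grid.size))
    for t in range(n_trials):
        x = rng.uniform(-1, 1, n_train)
        y = f(x) + sigma * rng.standard_normal(n_train)
        g = np.polynomial.Polynomial.fit(x, y, deg=degree)
        preds[t] = g(x_grid)
    g_bar = preds.mean(axis=0)                          # the average hypothesis
    bias = np.mean((g_bar - f(x_grid)) ** 2)
    var = np.mean((preds - g_bar) ** 2)
    # expected out-of-sample error measured directly against noisy test targets
    y_test = f(x_grid) + sigma * rng.standard_normal((n_trials, x_grid.size))
    e_out = np.mean((preds - y_test) ** 2)
    print(f"degree {degree}: bias = {bias:.3f}  var = {var:.3f}  "
          f"sigma^2 + bias + var = {sigma**2 + bias + var:.3f}  E_out = {e_out:.3f}")
```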

The passage above is excerpted mainly from the book Learning From Data; it explains what overfitting means and how noise contributes to it.
Here is a nice summary of overfitting:
Large VC dimension => high model complexity => small in-sample error => model not smooth enough => weak generalization ability => large out-of-sample error => overfitting => a model that is of no real use.

In short, deterministic noise is the error that the best hypothesis h in your chosen H makes when approximating a target function f that is not in H. Once x is given, this deterministic noise is fixed.
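A tiny sketch of that definition (my own example): let f be a cubic and H the set of linear functions; the best hypothesis h* in H is the least-squares linear approximation of f, and the deterministic noise at a point x is simply f(x) - h*(x), fixed once x is fixed.

```python
import numpy as np

f = lambda x: x**3                        # target f, not in H
x_grid = np.linspace(-1, 1, 1000)         # approximate the input distribution

# best hypothesis h* in H = {linear functions}: least-squares fit of f over x_grid
h_star = np.polynomial.Polynomial.fit(x_grid, f(x_grid), deg=1)

# deterministic noise at a fixed point x: the gap f(x) - h*(x)
for x in (0.0, 0.5, 1.0):
    print(f"x = {x}:  det. noise = {f(x) - h_star(x):+.3f}")
```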
A deterministic function can also be used to generate pseudo-random numbers (a pseudo-random generator).
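As a small aside on that remark, here is a minimal sketch of a deterministic function acting as a pseudo-random generator: a linear congruential generator (the constants below are the classic Numerical Recipes parameters); the whole "random" sequence is completely determined by the seed.

```python
def lcg(seed, m=2**32, a=1664525, c=1013904223):
    """Linear congruential generator: a purely deterministic recurrence
    x_{n+1} = (a * x_n + c) mod m whose output looks random."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m          # scale to [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])   # same seed -> same "random" sequence
```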
For a detailed discussion, see Learning From Data.


2015-8-27
艺少

Original article: https://www.cnblogs.com/huty/p/8519205.html