Deep RL Bootcamp Lecture 8: Derivative-Free Methods

 

In DFO (derivative-free optimization) you don't try to exploit any problem structure; the objective is treated as a black box that maps a parameter vector to a score.
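As an illustration (not from the lecture itself) of what treating the problem as a black box looks like, here is a minimal Gaussian random-search loop in Python; the names `evaluate_return` and `random_search` and the toy quadratic objective are assumptions standing in for a real policy rollout:

```python
import numpy as np

def evaluate_return(theta):
    """Stand-in for the black-box objective: in RL this would roll out the
    policy with parameters `theta` and return the total episode reward.
    Here it is a toy quadratic so the example runs on its own."""
    return -np.sum((theta - 1.0) ** 2)

def random_search(dim, iters=200, step=0.1, seed=0):
    """Pure random search: propose a Gaussian perturbation and keep it only
    if the black-box score improves. No gradients, no structure used."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    best = evaluate_return(theta)
    for _ in range(iters):
        candidate = theta + step * rng.standard_normal(dim)
        score = evaluate_return(candidate)
        if score > best:
            theta, best = candidate, score
    return theta, best

print(random_search(dim=5))
```

The loop never asks for gradients or looks inside the objective; it only compares scores, which is exactly the black-box setting.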

 

 

DFO is typically used with a low-dimensional policy: for example, a system with 30 degrees of freedom and only about 120 policy parameters to tune.

 

 

Keep the best-scoring ("positive") samples and move the sampling distribution toward them in a smooth way.
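The line above presumably describes a cross-entropy-method-style update: sample a population, keep the top-scoring samples, and move the sampling distribution toward them gradually rather than all at once. A minimal sketch under that assumption; the elite fraction, smoothing factor, and toy objective are arbitrary choices, not the lecture's values:

```python
import numpy as np

def cem(evaluate_return, dim, iters=50, pop=100, elite_frac=0.2, smooth=0.7, seed=0):
    """Cross-entropy-method-style search: sample a population from a Gaussian,
    keep the best-scoring samples, and refit the mean/std toward them with
    smoothing so the distribution does not collapse in a single step."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((pop, dim))
        scores = np.array([evaluate_return(s) for s in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]             # keep the positive results
        mean = smooth * mean + (1 - smooth) * elites.mean(axis=0)   # ...in a smooth way
        std = smooth * std + (1 - smooth) * elites.std(axis=0)
    return mean

# Toy usage: maximize -||theta - 1||^2, whose optimum is all ones.
print(cem(lambda th: -np.sum((th - 1.0) ** 2), dim=5).round(2))
```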

 

How do evolutionary methods work well in a high-dimensional setting?

If you normalize the data well, evolutionary methods can work well on MuJoCo, even with plain random search.
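For concreteness, here is a sketch of the kind of parameter-perturbation update (evolution-strategies style) this note points at. It is an assumed minimal version, not the lecture's exact algorithm: the constants are arbitrary, and the observation-normalization part is not shown because the toy objective has no observations, so only return normalization appears.

```python
import numpy as np

def es_step(theta, evaluate_return, sigma=0.05, lr=0.02, n_pairs=50, rng=None):
    """One evolution-strategies-style update: perturb the parameters with
    antithetic Gaussian noise, score each perturbed policy with the
    black-box return, and move theta along the return-weighted noise."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = rng.standard_normal((n_pairs, theta.size))
    returns = np.array([[evaluate_return(theta + sigma * e),
                         evaluate_return(theta - sigma * e)] for e in eps])
    # Normalizing the return differences keeps the step size insensitive
    # to the scale of the rewards.
    adv = returns[:, 0] - returns[:, 1]
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    grad_estimate = (adv[:, None] * eps).mean(axis=0) / (2 * sigma)
    return theta + lr * grad_estimate

# Toy usage: climb a quadratic "return" toward the optimum at all ones.
rng = np.random.default_rng(0)
theta = np.zeros(10)
for _ in range(500):
    theta = es_step(theta, lambda th: -np.sum((th - 1.0) ** 2), rng=rng)
print(theta.round(2))
```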

They can still get stuck in local optima, though.

For the humanoid task, about 200k parameters need to be tuned, and the policy is learned by the evolutionary method.

The four videos show four different local optima; once the search gets stuck in one, it never gets out.

Evolutionary methods are roughly 10 times worse (less sample-efficient) than action-space policy gradient methods.

Evolutionary methods are hard to tune; previously, people did not get them to work with deep nets.

Original post: https://www.cnblogs.com/ecoflex/p/8979721.html