Repost: Why machine learning algorithms are hard to tune and how to fix it

Key points

  • We often optimize a linear combination of losses, hoping to reduce both \(L_0\) and \(L_1\) simultaneously, but this linear combination is precarious and treacherous.
  • The authors show that when the Pareto front is concave, optimizing the linear combination ends up minimizing only one of the two losses; the linear combination behaves as intended only when the Pareto front is convex.
  • In practice, we generally cannot know in advance whether the Pareto front is convex or concave.
  • We can instead reformulate the weighted sum of losses as a constrained optimization problem, e.g., minimizing \(L_1\) subject to \(L_0 \leq \epsilon\) (see the sketch after this list).
  • Based on this reformulation, the authors give three possible solutions.
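Below is a minimal, hypothetical sketch contrasting the two formulations on toy quadratic losses: (a) gradient descent on a fixed linear combination \(L_1 + \lambda L_0\), and (b) the constrained form \(\min L_1\) s.t. \(L_0 \leq \epsilon\), handled here with a Lagrangian whose multiplier is updated by gradient ascent. The loss functions, step sizes, and epsilon are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

# Hypothetical toy losses of a 2-D parameter theta, for illustration only.
def loss_0(theta):  # e.g., a task-specific term
    return np.sum((theta - np.array([1.0, 0.0])) ** 2)

def loss_1(theta):  # e.g., a regularization-style term
    return np.sum((theta - np.array([0.0, 1.0])) ** 2)

def grad_0(theta):
    return 2.0 * (theta - np.array([1.0, 0.0]))

def grad_1(theta):
    return 2.0 * (theta - np.array([0.0, 1.0]))

# (a) Linear combination: minimize L1 + weight * L0 with a fixed weight.
def linear_combination(weight, lr=0.1, steps=500):
    theta = np.zeros(2)
    for _ in range(steps):
        theta -= lr * (grad_1(theta) + weight * grad_0(theta))
    return theta

# (b) Constrained reformulation: minimize L1 subject to L0 <= eps,
#     via a Lagrangian; theta is updated by descent, the multiplier by
#     projected gradient ascent (kept non-negative).
def constrained(eps, lr=0.1, lam_lr=0.5, steps=2000):
    theta, lam = np.zeros(2), 0.0
    for _ in range(steps):
        theta -= lr * (grad_1(theta) + lam * grad_0(theta))
        lam = max(0.0, lam + lam_lr * (loss_0(theta) - eps))
    return theta, lam

if __name__ == "__main__":
    t = linear_combination(weight=1.0)
    print("linear combination:", t, "L0 =", loss_0(t), "L1 =", loss_1(t))
    tc, lam = constrained(eps=0.5)
    print("constrained:", tc, "L0 =", loss_0(tc), "L1 =", loss_1(tc), "lambda =", lam)
```

In the linear-combination version the trade-off is fixed by the chosen weight, whereas in the constrained version the multiplier adapts automatically until the constraint \(L_0 \leq \epsilon\) is (approximately) satisfied.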

References

Original article: https://www.cnblogs.com/DemonHunter/p/14860756.html