Few-Shot Learning

**Few-Shot Learning: A New Approach**

In essence, this is a novel parameter-updating method intended to replace the backpropagation algorithm. Let us begin with the math. Let $x$ be the input and $y \leftarrow f(x \mid \theta)$ the inference model, so $y$ is the output of an AI system with parameters $\theta$. Let $R$ be the reward, given by $R \leftarrow \psi(y, \hat{y})$, meaning that $R$ is derived from the prediction $y$ and the target $\hat{y}$; we normally treat $\psi$ as a fixed reward function.

Define $\langle R, E, A \rangle$ as the **Reward-Environment-Actor** triplet, similar to the setting described in reinforcement learning. Then $\Delta\theta \leftarrow g(\langle R, E, A \rangle \mid \eta)$ is the gradient generator. The design of $g(\cdot)$ becomes the core issue in training the ANN, since I believe it is the **GRADIENTS** that **SHAPE** our model; a more flexible and delicate gradient-generation algorithm deserves study. The parameter-update equation remains the same:

$$\theta_{t+1} \leftarrow \theta_t + \Delta\theta_t.$$

Thus we have derived a novel gradient generator with parameter $\eta$, the meta setting of an AI learning system, which makes this a **META LEARNING** problem. For a meta-learning task, it is necessary to consider the generalizability of the model, that is, how robustly performance transfers between datasets or data splits. We now obtain the overall optimization objective for transferring between data splits $\mathbb{D}_1$ and $\mathbb{D}_2$:

$$\operatorname*{argmax}_{\eta} \quad \psi\big(f(x_{t+1} \mid \theta_t + g(\langle R, E, A \rangle \mid \eta)),\ \hat{y}_{t+1}\big), \qquad \forall x_t \in \mathbb{D}_1,\ x_{t+1} \in \mathbb{D}_2.$$
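To make the plumbing concrete, here is a minimal sketch of the update loop above. The concrete choices are all assumptions not fixed by the text: a linear model for $f$, a negative-squared-error reward for $\psi$, and a deliberately simple $g$ (a finite-difference estimate of the reward gradient scaled by a single scalar meta-parameter $\eta$); any learnable module could stand in for $g$ without changing the update equation.

```python
import numpy as np

# Sketch of the <R, E, A> -> Delta(theta) update loop. The linear model,
# squared-error reward, and finite-difference g are illustrative
# assumptions only; they just make the loop runnable end to end.

rng = np.random.default_rng(0)

def f(x, theta):
    """Inference model y <- f(x | theta); assumed linear here."""
    return x @ theta

def psi(y, y_hat):
    """Fixed reward R <- psi(y, y_hat): negative mean squared error."""
    return -np.mean((y - y_hat) ** 2)

def g(triplet, eta):
    """Gradient generator Delta(theta) <- g(<R, E, A> | eta).

    Assumption: eta is a scalar step size and the update direction is a
    finite-difference estimate of dR/dtheta.
    """
    R, (x, y_hat), theta = triplet            # reward, environment, actor
    eps = 1e-4
    grad = np.zeros_like(theta)
    for i in range(theta.size):               # forward difference per coordinate
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (psi(f(x, theta + d), y_hat) - R) / eps
    return eta * grad

# Adapt theta on split D1 with theta_{t+1} <- theta_t + Delta(theta_t).
true_theta = np.array([1.0, -2.0, 0.5, 3.0])
x1 = rng.normal(size=(32, 4))
y1 = x1 @ true_theta

theta, eta = np.zeros(4), 0.2
for _ in range(300):
    R = psi(f(x1, theta), y1)
    theta = theta + g((R, (x1, y1), theta), eta)

# The meta-objective scores eta by the reward the adapted theta earns on a
# held-out split D2, i.e. how well the update rule transfers across splits.
x2 = rng.normal(size=(32, 4))
y2 = x2 @ true_theta
print("reward on D1:", psi(f(x1, theta), y1))
print("reward on D2:", psi(f(x2, theta), y2))
```

The outer $\operatorname{argmax}_{\eta}$ would then sit around this whole loop: each candidate $\eta$ is scored by the reward its adapted $\theta$ achieves on $\mathbb{D}_2$, and the best-scoring $\eta$ is kept.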
Original post: https://www.cnblogs.com/thisisajoke/p/11287111.html