[Deep Learning Study Notes] Annotating yusugomori's LR code --- LogisticRegression.h

Continuing with yusugomori's code, this time looking at logistic regression. In a DBN (Deep Belief Network), the lower layers are RBMs and the top layer is the LR. For background on regression, binary classification, and logistic regression, see the articles reposted earlier. The routine is: define the objective function (the softmax loss), take partial derivatives with respect to the parameters, and derive the weight update rules.
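As a quick reference, these are the standard softmax-regression formulas behind that routine (written in the usual notation, not copied from yusugomori's code): $x$ is the input vector, $y$ the one-hot label, $p$ the predicted probabilities, and $\eta$ the learning rate.

\[
p_k = \frac{e^{w_k^\top x + b_k}}{\sum_j e^{w_j^\top x + b_j}}, \qquad
L = -\sum_k y_k \log p_k
\]
\[
\frac{\partial L}{\partial w_k} = (p_k - y_k)\,x, \qquad
w_k \leftarrow w_k - \eta\,(p_k - y_k)\,x, \qquad
b_k \leftarrow b_k - \eta\,(p_k - y_k)
\]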

The annotated LogisticRegression.h is as follows:

class LogisticRegression 
{
public:
  	int N;  		// number of input samples
  	int n_in;		// number of input nodes
  	int n_out;		// number of output nodes
  	double **W;		// weights connecting the input nodes and the output nodes
  	double *b;		// bias of the output nodes
  	// allocate memory and initialize the parameters
  	LogisticRegression(
  			int, 	// N
  			int, 	// n_in
  			int		// n_out
		  	);
  	~LogisticRegression();
  
public:
	// train the logistic regression model, update the value of W and b
  	void train (
	  		int*, 	// the input from input nodes in training set
	  		int*, 	// the output from output nodes in training set
	  		double	// the learning rate
		  );
 	// calculate the softmax for an input vector
 	// softmax(d)_i = exp(d_i - Max) / sum_j( exp(d_j - Max) )
  	void softmax (
	  		double*	// the calculated softmax probability -- input & output
			  );
	// do prediction by calculating the softmax probability from input
  	void predict (
	  		int*, 	// the input from input nodes in testing set
	  		double*	// the calculated softmax probability
			  );
};
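To make the header comment concrete, here is a minimal, self-contained sketch of how softmax() could be implemented: subtract the maximum before exponentiating for numerical stability, then normalize. This is only an illustration written against the formula in the comment, not yusugomori's actual .cpp.

// softmax_sketch.cpp -- illustrative only, assumes in-place update of a length-n_out array
#include <cmath>
#include <cstdio>

void softmax_sketch(double *x, int n_out) {
    // find the maximum activation to avoid overflow in exp()
    double max = x[0];
    for (int i = 1; i < n_out; ++i)
        if (x[i] > max) max = x[i];

    // exponentiate the shifted values and accumulate the normalizer
    double sum = 0.0;
    for (int i = 0; i < n_out; ++i) {
        x[i] = std::exp(x[i] - max);   // exp(d_i - Max)
        sum += x[i];
    }

    // normalize so the outputs form a probability distribution
    for (int i = 0; i < n_out; ++i)
        x[i] /= sum;
}

int main() {
    double d[3] = {1.0, 2.0, 3.0};
    softmax_sketch(d, 3);
    for (int i = 0; i < 3; ++i)
        std::printf("%f ", d[i]);      // roughly 0.090 0.245 0.665
    std::printf("\n");
    return 0;
}

In train(), the same probabilities would then be compared against the label to apply the gradient update given above; predict() just runs the linear transform plus softmax on a test input.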


One side note: the earlier RBM annotations were written at home in VS2008; this one was done in CFree 5.0, which is lightweight and has a very considerate editor. A big thumbs-up!


Original post: https://www.cnblogs.com/dyllove98/p/3194133.html