PyTorch Basics Tutorial 1

0. Quick start: install PyTorch as described in the previous post, then start python in a terminal and run import torch.
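A minimal sanity check after installation (a small sketch; the CUDA line simply prints False on a CPU-only machine):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # whether a usable GPU is detected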

************************************************************

1. PyTorch data structures

1) Initialization:

eg1: initialize from a Python list:

data = [-1, -2, 1, 2]
tensor = torch.FloatTensor(data)  # convert to a 32-bit float tensor

data = [[1, 2], [3, 4]]
tensor = torch.FloatTensor(data)  # convert to a 32-bit float tensor

data = torch.FloatTensor([1,2,3])

eg2: numpy to torch

import torch

import numpy as np

np_data = np.arange(6).reshape((2, 3))

torch_data = torch.from_numpy(np_data)

eg3: initialize directly with the built-in factory functions

import torch

a=torch.rand(3,4)

b=torch.eye(3,4)

c=torch.ones(3,4)

d=torch.zeros(3,4)

x = torch.linspace(1, 10, 10)

eg4: allocate a tensor with the same size as another, then fill it with fill_

y = dist_an.data.new()      # dist_an is an existing Variable; new() allocates an empty tensor of the same type
y.resize_as_(dist_an.data)  # resize to the same shape
y.fill_(1)                  # fill every element with 1

eg5: no initialization; the tensor holds arbitrary values until you assign to it

a = torch.Tensor(2, 4)
c = torch.IntTensor(2, 3); print(c)  # the element type can also be specified

2) Type and device conversions

eg1: moving data between CPU and GPU

d=b.cuda()

e=d.cpu()

net = Net().cuda()  # Net is a user-defined nn.Module

eg2: converting between numpy and torch

c=b.numpy()

b = torch.from_numpy(a)

eg3: wrapping a tensor in a differentiable Variable

from torch.autograd import Variable
y = Variable(y)

eg4: converting between tensor types (ByteTensor, FloatTensor, ...)

dtype = torch.FloatTensor

# dtype = torch.cuda.FloatTensor  # use this variant on GPU

x=torch.rand(3,4).type(dtype)

eg5: getting the plain tensor back from a Variable

y = y.data

************************************************************

2. Basic operations

Matrix multiplication: torch.mm(tensor, tensor)

Mean: torch.mean(tensor)

Trigonometric functions: torch.sin(tensor) (numpy counterpart: np.sin(data))

Absolute value: torch.abs(tensor)
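A quick runnable sketch of these operations (a minimal example; the input values are arbitrary):

import torch

tensor = torch.FloatTensor([[1, 2], [3, 4]])

print(torch.mm(tensor, tensor))   # matrix product: [[7, 10], [15, 22]]
print(torch.mean(tensor))         # mean of all elements: 2.5
print(torch.sin(tensor))          # element-wise sine
print(torch.abs(torch.FloatTensor([-1, -2, 1, 2])))  # element-wise absolute value: [1, 2, 1, 2]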

************************************************************

3. Packages and functions

1) Official Chinese API docs

2) Official English docs

************************************************************

4. Backpropagation example:

from torch.autograd import Variable

a = Variable(torch.randn(2, 2), requires_grad=True)

b = a + 2

c = b * b * 3

out = c.mean()

out.backward()

print(a.grad)  # grad is an attribute, not a method
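For reference, out = mean(3 * (a + 2)^2) over the four elements, so each entry of a.grad should equal 1.5 * (a + 2) evaluated at that entry.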

************************************************************

5. Defining, training, saving, restoring, and printing a network (Morvan)

See the next post. References: blog 1, blog 2, blog 3, Morvan's tutorial, custom Function / Module (important), custom examples, (important) extending a Module, and two explanations of writing custom autograd code.

1) No learnable parameters: a Function alone is enough


import torch
from torch.autograd import Function

class ReLUF(Function):
    def forward(self, input):
        self.save_for_backward(input)

        output = input.clamp(min=0)
        return output

    def backward(self, output_grad):
        input, = self.saved_tensors

        input_grad = output_grad.clone()
        input_grad[input < 0] = 0
        return input_grad

## Test
if __name__ == "__main__":
    from torch.autograd import Variable

    torch.manual_seed(1111)
    a = torch.randn(2, 3)

    va = Variable(a, requires_grad=True)
    vb = ReLUF()(va)
    print(va.data, vb.data)

    vb.backward(torch.ones(va.size()))
    print(va.grad.data)  # the gradient flows back to the leaf Variable va

2) With learnable parameters: write a Function first, then wrap it in a Module that holds the parameters

step 1:

import torch
from torch.autograd import Function

class LinearF(Function):

    def forward(self, input, weight, bias=None):
        self.save_for_backward(input, weight, bias)

        output = torch.mm(input, weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)

        return output

    def backward(self, grad_output):
        input, weight, bias = self.saved_tensors

        grad_input = grad_weight = grad_bias = None
        if self.needs_input_grad[0]:
            grad_input = torch.mm(grad_output, weight)
        if self.needs_input_grad[1]:
            grad_weight = torch.mm(grad_output.t(), input)
        if bias is not None and self.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        if bias is not None:
            return grad_input, grad_weight, grad_bias
        else:
            return grad_input, grad_weight

step 2:

import torch
import torch.nn as nn

class Linear(nn.Module):

    def __init__(self, in_features, out_features, bias=True):
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # note: these parameters are left uninitialized here; fill them
        # (e.g. randomly) before training
        self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(out_features))
        else:
            self.register_parameter('bias', None)

    def forward(self, input):
        return LinearF()(input, self.weight, self.bias)

************************************************************

6. Tutorials

1) Official site

2) GitHub

3) Morvan's videos

4) Official Chinese API docs

5) Official English docs

************************************************************

Aside: a small snippet for timing a piece of code:

import time

np_data = np.arange(6).reshape((2, 3))

start_time = time.time()
# ... code to be timed ...
end_time = time.time()
print("Spend time:", end_time - start_time)

Original post: https://www.cnblogs.com/Wanggcong/p/7719975.html