
PyTorch Learning (2)


1 Differences and connections between NumPy and Torch

 1.1 Converting NumPy arrays to and from Torch tensors

 1.2 Variables in Torch

2 Activation Functions

3 Regression (curve fitting)

4 Classification

5 Torch networks

 5.1 Building a Torch network quickly

 5.2 Saving and loading networks and parameters

 5.3 Batch training

 5.4 Speeding up training with optimizers

6 Types of neural networks

These notes follow Morvan (莫凡)'s PyTorch tutorials. They overlap a little with "PyTorch Learning (1)", but most of the material is different, so the two can be read together.

1 Differences and connections between NumPy and Torch

1.1 Converting NumPy arrays to and from Torch tensors

1) Data type conversion

Note: the example below uses a 2-D array, but Torch tensors can have any number of dimensions.

import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)  # NumPy array -> Torch tensor
tensor2array = torch_data.numpy()       # Torch tensor -> NumPy array

print('\nnp_data', np_data,
      '\ntorch_data', torch_data,
      '\ntensor2array', tensor2array, )

# Output
np_data [[0 1 2]
 [3 4 5]]
torch_data tensor([[0, 1, 2],
        [3, 4, 5]], dtype=torch.int32)
tensor2array [[0 1 2]
 [3 4 5]]
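One detail worth knowing: torch.from_numpy shares memory with the source array, so an in-place change on one side is visible on the other. A minimal sketch:

import torch
import numpy as np

a = np.arange(3)
t = torch.from_numpy(a)  # t reuses a's memory buffer
a[0] = 100               # modify the NumPy array in place
print(t)                 # the change shows up in the tensor as well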

2) Matrix multiplication

data = [[1, 2], [2, 3]]
tensor = torch.FloatTensor(data)
print('\nnumpy', np.matmul(data, data),
      '\ntorch', torch.matmul(tensor, tensor))

# Output
numpy [[ 5  8]
 [ 8 13]]
torch tensor([[ 5.,  8.],
        [ 8., 13.]])
Note that Torch's default floating-point type is float (float32), which is why the torch result above is printed with decimal points.
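A quick way to check this (a minimal sketch):

import torch

print(torch.get_default_dtype())          # torch.float32
print(torch.FloatTensor([[1, 2]]).dtype)  # torch.float32
print(torch.tensor([1, 2]).dtype)         # torch.int64 -- integer input stays integer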

1.2 Variables in Torch

import torch
from torch.autograd import Variable

tensor = torch.FloatTensor([[1, 2], [3, 4]])
variable = Variable(tensor, requires_grad=True)

t_out = torch.mean(tensor*tensor)
v_out = torch.mean(variable*variable)
print('tensor', tensor)
print('variable', variable)
print('t_out', t_out)
print('v_out', v_out)

v_out.backward()              # backpropagate
print('grad', variable.grad)  # gradient of the variable: d(mean(v*v))/dv = v/2
print(variable.data.numpy())

# Output
tensor tensor([[1., 2.],
        [3., 4.]])
variable tensor([[1., 2.],
        [3., 4.]], requires_grad=True)
t_out tensor(7.5000)
v_out tensor(7.5000, grad_fn=<MeanBackward0>)
grad tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
[[1. 2.]
 [3. 4.]]
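Since PyTorch 0.4 the Variable wrapper is no longer needed: a plain tensor created with requires_grad=True behaves the same way. A minimal sketch of the equivalent code:

import torch

v = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
out = torch.mean(v * v)
out.backward()
print(v.grad)  # tensor([[0.5000, 1.0000], [1.5000, 2.0000]])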

2 Activation Functions

For multi-layer networks, choosing the activation function takes some care.

Recommended pairings of network type and activation:

• CNN: relu

• RNN: relu / tanh

Three commonly used activation functions (plotted as curves below): relu, sigmoid and tanh.

import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt

x = torch.linspace(-5, 5, 200)  # 200 points from -5 to 5
x = Variable(x)
x_np = x.data.numpy()

y_relu = torch.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()
y_tanh = torch.tanh(x).data.numpy()

plt.figure(1, figsize=(8, 6))
plt.subplot(311)
plt.plot(x_np, y_relu, c='r', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')

plt.subplot(312)
plt.plot(x_np, y_sigmoid, c='g', label='sigmoid')
plt.ylim((-0.2, 1.5))
plt.legend(loc='best')

plt.subplot(313)
plt.plot(x_np, y_tanh, c='b', label='tanh')
plt.ylim((-1.2, 1.5))
plt.legend(loc='best')
plt.show()

# Output: a figure with the three activation curves (relu, sigmoid, tanh)

3 Regression (curve fitting)

Neural-network tasks generally fall into two kinds:

• Regression: fit a curve to a set of data points

• Classification: assign each data point to a class

This section covers regression:

import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # add a dimension: shape (100,) -> (100, 1)
y = x.pow(2) + 0.2*torch.rand(x.size())

x, y = Variable(x), Variable(y)

# plt.scatter(x.data.numpy(), y.data.numpy())
# plt.show()

# Build the network
class Net(torch.nn.Module):
    def __init__(self, n_features, n_hidden, n_output):
        super(Net, self).__init__()
        # standard module initialisation above
        self.hidden = torch.nn.Linear(n_features, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = torch.relu(self.hidden(x))
        x = self.predict(x)
        return x

net = Net(1, 10, 1)  # 1 input feature, 10 hidden units, 1 output
print(net)

plt.ion()  # interactive plotting
plt.show()

optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
loss_function = torch.nn.MSELoss()  # mean squared error for regression; classification uses other losses

for t in range(100):
    out = net(x)
    loss = loss_function(out, y)  # prediction first, target second
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.plot(x.data.numpy(), out.data.numpy(), 'r-', lw=5)
        plt.text(0.5, 0, 'Loss=%.4f' % loss.item(), fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)

plt.ioff()
plt.show()

# Output
Net(
  (hidden): Linear(in_features=1, out_features=10, bias=True)
  (predict): Linear(in_features=10, out_features=1, bias=True)
)

The final output figure (scatter points with the fitted red curve) is shown in the original post.
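Once training finishes, the trained net can be used for prediction on new inputs. A minimal sketch (continuing from the code above; the value 0.5 is just an example):

new_x = torch.tensor([[0.5]])  # shape (1, 1), same layout as the training data
with torch.no_grad():          # no gradients needed for inference
    pred = net(new_x)
print(pred)                    # should be close to 0.5**2 = 0.25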

4 Classification

import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt

n_data = torch.ones(100, 2)
x0 = torch.normal(2*n_data, 1)   # class 0: points around (2, 2)
y0 = torch.zeros(100)            # class 0 labels
x1 = torch.normal(-2*n_data, 1)  # class 1: points around (-2, -2)
y1 = torch.ones(100)             # class 1 labels
x = torch.cat((x0, x1), 0).type(torch.FloatTensor)
y = torch.cat((y0, y1), ).type(torch.LongTensor)

x, y = Variable(x), Variable(y)

# plt.scatter(x.data.numpy(), y.data.numpy())
# plt.show()

# Build the network
class Net(torch.nn.Module):
    def __init__(self, n_features, n_hidden, n_output):
        super(Net, self).__init__()
        # standard module initialisation above
        self.hidden = torch.nn.Linear(n_features, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = torch.relu(self.hidden(x))
        x = self.predict(x)
        return x

net = Net(2, 10, 2)  # 2 input features, 10 hidden units, 2 outputs
print(net)

plt.ion()  # interactive plotting
plt.show()

optimizer = torch.optim.SGD(net.parameters(), lr=0.2)
loss_function = torch.nn.CrossEntropyLoss()

for t in range(10):  # number of training steps
    out = net(x)
    loss = loss_function(out, y)  # prediction first, target second
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 2 == 0:
        plt.cla()
        out = torch.softmax(out, 1)
        prediction = torch.max(out, 1)[1]  # [1] gives the index of the maximum (the predicted class); [0] gives the maximum value itself
        pred_y = prediction.data.numpy().squeeze()
        target_y = y.data.numpy()
        plt.scatter(x.data.numpy()[:, 0], x.data.numpy()[:, 1], c=pred_y, s=100)
        accuracy = sum(pred_y == target_y) / 200
        plt.text(1.5, -4, 'Accuracy=%.4f' % accuracy, fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)

plt.ioff()
plt.show()

# Output: scatter plot coloured by predicted class, with the accuracy printed on the figure
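After training, classifying a new point works the same way as the prediction step inside the loop. A minimal sketch (the point (2, 2) is an arbitrary example near the class-0 cluster):

new_point = torch.tensor([[2.0, 2.0]])
with torch.no_grad():
    logits = net(new_point)
    probs = torch.softmax(logits, 1)                 # class probabilities
    predicted_class = torch.max(probs, 1)[1].item()  # 0 or 1
print(predicted_class)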

5 Torch networks

5.1 Building a Torch network quickly

# Build the network the explicit way
class Net(torch.nn.Module):
    def __init__(self, n_features, n_hidden, n_output):
        super(Net, self).__init__()
        # standard module initialisation above
        self.hidden = torch.nn.Linear(n_features, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = torch.relu(self.hidden(x))
        x = self.predict(x)
        return x

net1 = Net(2, 10, 2)  # 2 input features, 10 hidden units, 2 outputs
print(net1)

# Build the same network with torch.nn.Sequential
net2 = torch.nn.Sequential(
    torch.nn.Linear(2, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 2),
)
print(net2)

net1 and net2 define the same architecture. The second (Sequential) style is the one used most often, and it resembles how models are stacked in TensorFlow.
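One practical difference is how the layers are addressed afterwards: the subclass exposes them by attribute name, while Sequential exposes them by position. A minimal sketch:

print(net1.hidden)  # Linear(in_features=2, out_features=10, bias=True)
print(net2[0])      # the same kind of layer, accessed by index in the Sequential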

5.2 Saving and loading networks and parameters

import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt

torch.manual_seed(1)

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # shape (100,) -> (100, 1)
y = x.pow(2) + 0.2*torch.rand(x.size())

x, y = Variable(x, requires_grad=False), Variable(y, requires_grad=False)  # requires_grad=False: no gradients are tracked for the inputs

def save():
    net1 = torch.nn.Sequential(
        torch.nn.Linear(1, 10),
        torch.nn.ReLU(),
        torch.nn.Linear(10, 1),
    )
    optimizer = torch.optim.SGD(net1.parameters(), lr=0.05)
    loss_function = torch.nn.MSELoss()

    for t in range(1000):  # training steps
        prediction = net1(x)
        loss = loss_function(prediction, y)  # prediction first, target second
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    torch.save(net1, 'net.pkl')                      # save the entire model
    torch.save(net1.state_dict(), 'net_params.pkl')  # save only the parameters (state_dict)

    plt.figure(1, figsize=(10, 3))
    plt.subplot(131)
    plt.title('Net1')
    plt.scatter(x.data.numpy(), y.data.numpy())
    plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)

def restore_net():
    net2 = torch.load('net.pkl')  # load the entire model
    prediction = net2(x)
    plt.subplot(132)
    plt.title('Net2')
    plt.scatter(x.data.numpy(), y.data.numpy())
    plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)

def restore_params():
    net3 = torch.nn.Sequential(   # rebuild the same architecture, then load the saved parameters
        torch.nn.Linear(1, 10),
        torch.nn.ReLU(),
        torch.nn.Linear(10, 1),
    )
    net3.load_state_dict(torch.load('net_params.pkl'))
    prediction = net3(x)
    plt.subplot(133)
    plt.title('Net3')
    plt.scatter(x.data.numpy(), y.data.numpy())
    plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)
    plt.show()

save()
restore_net()
restore_params()

# Output: three subplots (Net1, Net2, Net3) showing the same fitted curve
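Saving only the state_dict is the generally recommended approach. When training is to be resumed later, the optimizer state can be checkpointed alongside the model. A minimal sketch, assuming a model `net` and its `optimizer` are in scope (the file name is arbitrary):

checkpoint = {
    'model': net.state_dict(),
    'optimizer': optimizer.state_dict(),
}
torch.save(checkpoint, 'checkpoint.pkl')

# later: rebuild the objects, then restore both states
checkpoint = torch.load('checkpoint.pkl')
net.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])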

5.3 Batch training

import torch
import torch.utils.data as Data

BATCH_SIZE = 5  # train on mini-batches of 5 samples

x = torch.linspace(1, 10, 10)
y = torch.linspace(10, 1, 10)

torch_dataset = Data.TensorDataset(x, y)
loader = Data.DataLoader(
    dataset=torch_dataset,
    batch_size=BATCH_SIZE,
    shuffle=True,    # reshuffle the data each epoch
    num_workers=2,   # number of worker processes used to load the data
)

def show_batch():
    for epoch in range(3):  # go through the whole dataset three times
        for step, (batch_x, batch_y) in enumerate(loader):
            print('Epoch: ', epoch, '| Step: ', step, '| batch x: ', batch_x.numpy(), '| batch y: ', batch_y.numpy())

if __name__ == '__main__':
    show_batch()

# Output
      Epoch:  0 | Step:  0 | batch x: [10.  1.  2.  9.  4.] | batch y: [ 1. 10.  9.  2.  7.]
      Epoch:  0 | Step:  1 | batch x: [5. 7. 6. 3. 8.] | batch y: [6. 4. 5. 8. 3.]
      Epoch:  1 | Step:  0 | batch x: [3. 1. 2. 7. 5.] | batch y: [ 8. 10.  9.  4.  6.]
      Epoch:  1 | Step:  1 | batch x: [10.  4.  9.  8.  6.] | batch y: [1. 7. 2. 3. 5.]
      Epoch:  2 | Step:  0 | batch x: [10.  7.  1.  5.  4.] | batch y: [ 1.  4. 10.  6.  7.]
      Epoch:  2 | Step:  1 | batch x: [9. 3. 8. 6. 2.] | batch y: [2. 8. 3. 5. 9.]
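If the batch size does not divide the dataset evenly, the last batch of each epoch is simply smaller; passing drop_last=True discards it instead. A minimal sketch using the same 10-sample dataset:

# with 10 samples and batch_size=8, the second batch holds the remaining 2 samples
loader = Data.DataLoader(dataset=torch_dataset, batch_size=8, shuffle=True)
for batch_x, batch_y in loader:
    print(batch_x.numpy().shape)  # (8,) then (2,)

# drop_last=True would skip that final incomplete batch
loader = Data.DataLoader(dataset=torch_dataset, batch_size=8, shuffle=True, drop_last=True)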

5.4 Speeding up training with optimizers

• All optimizers update the network's parameters. The traditional update rule (plain gradient descent) is simply W += -learning_rate * dW. (The original formula image is missing; a minimal hand-written sketch of this rule follows below.)

• The Adam method keeps two running averages of the gradient: m, a momentum-like "downhill" term, and v, a friction-like term that damps the step size. Roughly: m = b1*m + (1-b1)*dW, v = b2*v + (1-b2)*dW², W += -learning_rate * m / (sqrt(v) + eps). (The original formula image is missing.)
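What torch.optim.SGD.step() does for plain SGD can be written out by hand. A minimal sketch (not the library's actual implementation, just the update rule on a toy parameter):

import torch

w = torch.randn(3, requires_grad=True)  # a parameter
lr = 0.02

loss = (w ** 2).sum()  # toy loss
loss.backward()        # fills w.grad

with torch.no_grad():  # update without tracking gradients
    w -= lr * w.grad   # W += -learning_rate * dW
w.grad.zero_()         # clear the gradient before the next step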
import torch
import torch.utils.data as Data
import matplotlib.pyplot as plt

LR = 0.02
BATCH_SIZE = 32
EPOCH = 12

x = torch.unsqueeze(torch.linspace(-1, 1, 1000), dim=1)
y = x.pow(2) + 0.1*torch.normal(torch.zeros(*x.size()))

# plt.scatter(x.numpy(), y.numpy())
# plt.show()

torch_dataset = Data.TensorDataset(x, y)
loader = Data.DataLoader(dataset=torch_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)

def build_net():
    # each optimizer must train its own network; sharing a single net would make the comparison meaningless
    return torch.nn.Sequential(
        torch.nn.Linear(1, 20),
        torch.nn.ReLU(),
        torch.nn.Linear(20, 1)
    )

net_SGD = build_net()
# net_Momentum = build_net()
# net_RMSprop = build_net()
net_Adam = build_net()
nets = [net_SGD, net_Adam]

opt_SGD = torch.optim.SGD(net_SGD.parameters(), lr=LR)
# opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=LR, momentum=0.7)
# opt_RMSprop = torch.optim.RMSprop(net_RMSprop.parameters(), lr=LR, alpha=0.9)
opt_Adam = torch.optim.Adam(net_Adam.parameters(), lr=LR, betas=(0.9, 0.99))
optimizers = [opt_SGD, opt_Adam]

loss_func = torch.nn.MSELoss()
losses_his = [[], []]  # record each network's loss at every step

def show_batch():
    for epoch in range(EPOCH):
        print(epoch)
        for step, (batch_x, batch_y) in enumerate(loader):
            for net, opt, l_his in zip(nets, optimizers, losses_his):
                output = net(batch_x)
                loss = loss_func(output, batch_y)
                opt.zero_grad()
                loss.backward()
                opt.step()
                l_his.append(loss.item())

    labels = ['SGD', 'Adam']
    for i, l_his in enumerate(losses_his):
        plt.plot(l_his, label=labels[i])
    plt.legend(loc='best')
    plt.xlabel('Steps')
    plt.ylabel('Loss')
    plt.ylim((0, 0.2))
    plt.show()

if __name__ == '__main__':
    show_batch()

# Output: loss curves for SGD and Adam plotted against training steps

6 Types of neural networks

• CNN (convolutional neural network)

import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
import matplotlib.pyplot as plt

EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = True

train_data = torchvision.datasets.MNIST(
    root='./mnist',
    train=True,
    transform=torchvision.transforms.ToTensor(),  # convert the PIL image to a (C, H, W) float tensor with values in [0, 1]
    download=DOWNLOAD_MNIST
)
# print(train_data.data.size())
# print(train_data.targets.size())
# plt.imshow(train_data.data[0].numpy(), cmap='gray')
# plt.title('%i' % train_data.targets[0])
# plt.show()

train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)

test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)
test_x = torch.unsqueeze(test_data.data, dim=1).type(torch.FloatTensor)[:2000]/255.  # first 2000 test images, scaled to [0, 1]
test_y = test_data.targets[:2000]

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(       # input: (1, 28, 28)
            nn.Conv2d(
                in_channels=1,
                out_channels=16,
                kernel_size=5,
                stride=1,
                padding=2,                # padding=(kernel_size-1)/2 keeps the spatial size
            ),                            # -> (16, 28, 28)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),  # -> (16, 14, 14)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 5, 1, 2),   # -> (32, 14, 14)
            nn.ReLU(),
            nn.MaxPool2d(2)               # -> (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten the conv2 output: (batch, 32, 7, 7) -> (batch, 32*7*7)
        output = self.out(x)
        return output

cnn = CNN()

optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()

def show_batch():
    for epoch in range(EPOCH):
        print(epoch)
        for step, (batch_x, batch_y) in enumerate(train_loader):
            output = cnn(batch_x)
            loss = loss_func(output, batch_y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if step % 50 == 0:
                test_output = cnn(test_x)
                pred_y = torch.max(test_output, 1)[1].data.squeeze()
                accuracy = (pred_y == test_y).sum().item() / float(test_y.size(0))
                print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %2f' % accuracy)
    test_output = cnn(test_x[:10])
    pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
    print(pred_y, 'prediction number')
    print(test_y[:10].numpy(), 'real number')

if __name__ == '__main__':
    show_batch()

# Output
0
Epoch:  0 | train loss: 2.2959 | test accuracy: 0.107000
……
Epoch:  0 | train loss: 0.0895 | test accuracy: 0.981500
[7 2 1 0 4 1 4 9 5 9] prediction number
[7 2 1 0 4 1 4 9 5 9] real number

Process finished with exit code 0
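To train the same CNN on a GPU (if one is available), the model and every batch have to be moved to the device. A minimal sketch of the changes:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
cnn = CNN().to(device)                                 # move the model's parameters to the GPU
test_x, test_y = test_x.to(device), test_y.to(device)  # move the test data as well

# inside the training loop, move each batch too:
# batch_x, batch_y = batch_x.to(device), batch_y.to(device)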
• RNN (recurrent neural network, generally used on time-ordered, sequential data)

• LSTM (long short-term memory network, a kind of RNN that adds three gating units: input, output and forget gates; a single-step sketch with nn.LSTMCell follows below)
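The gating can be seen in isolation with nn.LSTMCell, which performs a single time step. A minimal sketch (the sizes are arbitrary):

import torch
from torch import nn

cell = nn.LSTMCell(input_size=28, hidden_size=64)
x_t = torch.randn(5, 28)  # a batch of 5 inputs for one time step
h = torch.zeros(5, 64)    # initial hidden state
c = torch.zeros(5, 64)    # initial cell state (the "memory" the gates protect)
h, c = cell(x_t, (h, c))  # the input/forget/output gates update c and produce the new h
print(h.shape, c.shape)   # torch.Size([5, 64]) torch.Size([5, 64])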

# Classification: an LSTM over MNIST, reading each image row by row as a 28-step sequence
import torch
from torch import nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import torch.utils.data as Data

EPOCH = 1
BATCH_SIZE = 64
TIME_STEP = 28   # number of time steps = image height
INPUT_SIZE = 28  # input size per step = image width
LR = 0.01
DOWNLOAD_MNIST = False  # set to True if the MNIST data has not been downloaded yet

train_data = dsets.MNIST(root='./mnist', train=True, transform=transforms.ToTensor(), download=DOWNLOAD_MNIST)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)

test_data = dsets.MNIST(root='./mnist/', train=False)
test_x = test_data.data.type(torch.FloatTensor)[:2000]/255.
test_y = test_data.targets.numpy().squeeze()[:2000]

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.LSTM(
            input_size=INPUT_SIZE,
            hidden_size=64,
            num_layers=1,      # number of stacked LSTM layers
            batch_first=True,  # tensors are laid out as (batch, time_step, input_size)
        )
        self.out = nn.Linear(64, 10)

    def forward(self, x):
        r_out, (h_n, h_c) = self.rnn(x, None)  # h_n is the final hidden state, h_c the final cell state; None = start from a zero initial state
        out = self.out(r_out[:, -1, :])        # classify using the output at the last time step
        return out

rnn = RNN()
print(rnn)

# Training
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()

def show_batch():
    for epoch in range(EPOCH):
        for step, (x, y) in enumerate(train_loader):
            output = rnn(x.view(-1, 28, 28))
            loss = loss_func(output, y)
            optimizer.zero_grad()  # clear the gradients
            loss.backward()
            optimizer.step()       # apply the update

            if step % 50 == 0:
                test_output = rnn(test_x)
                pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
                accuracy = sum(pred_y == test_y) / test_y.size
                print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %2f' % accuracy)

    test_output = rnn(test_x[:10].view(-1, 28, 28))
    pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
    print(pred_y, 'prediction number')
    print(test_y[:10], 'real number')

if __name__ == '__main__':
    show_batch()

# Output
      Epoch:  0 | train loss: 2.2838 | test accuracy: 0.089500
      Epoch:  0 | train loss: 0.9505 | test accuracy: 0.600500
      ……
      Epoch:  0 | train loss: 0.1406 | test accuracy: 0.946000
      [7 2 1 0 4 1 4 9 5 9] prediction number
      [7 2 1 0 4 1 4 9 5 9] real number
# Regression: an RNN that learns to predict cos(t) from sin(t)
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
import torch.utils.data as Data

torch.manual_seed(1)  # fixed seed so every run starts from the same initialisation

TIME_STEP = 10
INPUT_SIZE = 1
LR = 0.02

# steps = np.linspace(0, np.pi*2, 100, dtype=np.float32)
# x_np = np.sin(steps)
# y_np = np.cos(steps)
# plt.plot(steps, y_np, 'r-', label='target (cos)')
# plt.plot(steps, x_np, 'b-', label='input (sin)')
# plt.legend(loc='best')
# plt.show()

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.RNN(
            input_size=INPUT_SIZE,
            hidden_size=32,
            num_layers=1,      # number of stacked RNN layers
            batch_first=True,  # (batch, time_step, input_size) layout
        )
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        r_out, h_state = self.rnn(x, h_state)  # x holds a whole sub-sequence of steps; h_state is the state carried between calls
        outs = []
        for time_step in range(r_out.size(1)):
            outs.append(self.out(r_out[:, time_step, :]))
        return torch.stack(outs, dim=1), h_state  # stack the per-step outputs back into a sequence

rnn = RNN()
print(rnn)

# Training
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)
loss_func = nn.MSELoss()

plt.figure(1, figsize=(12, 5))
plt.ion()

h_state = None
for step in range(60):
    start, end = step * np.pi, (step + 1) * np.pi
    steps = np.linspace(start, end, TIME_STEP, dtype=np.float32)
    x_np = np.sin(steps)
    y_np = np.cos(steps)
    x = torch.from_numpy(x_np[np.newaxis, :, np.newaxis])  # shape (batch=1, time_step, input_size=1)
    y = torch.from_numpy(y_np[np.newaxis, :, np.newaxis])

    prediction, h_state = rnn(x, h_state)
    h_state = h_state.data  # detach the hidden state so gradients do not flow back across iterations
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    plt.plot(steps, y_np.flatten(), 'r-')
    plt.plot(steps, prediction.data.numpy().flatten(), 'b-')
    plt.draw()
    plt.pause(0.05)

plt.ioff()
plt.show()

# Output: the predicted curve (blue) gradually matching the target cos curve (red)

       

posted @ 2020-11-11 18:33 代碼界的小菜鳥