How to implement dropout in PyTorch, and where to apply it

Posted on 2024-04-26 00:14:15


I'm not sure whether this is correct. Unfortunately, I couldn't find many good examples of how to parameterize a neural network.

What do you think of the way dropout is handled in these two classes? First, here is the original class:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.5):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        # p is accepted here but not used yet; dropout is added below

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out

Then I found two different ways of writing it, and I don't know how they differ. The first method uses:

self.drop_layer = nn.Dropout(p=p)

and the second:

self.dropout = nn.Dropout(p) 

What I ended up with:

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.5):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.drop_layer = nn.Dropout(p=p)

    def forward(self, x):
        # note: self.drop_layer is defined but never called here
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out


class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.5):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.dropout = nn.Dropout(p)

    def forward(self, x):
        # note: self.dropout is defined but never called here
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out

Does this do anything at all, and if not, how can I improve it? Does it give me the result I expect, namely a neural network where some neurons are dropped? One important detail: I only want to apply dropout to the second layer and leave everything else unchanged!


1 Answer
Answered by a user on 2024-04-26 00:14:15

The two examples you provided are exactly the same. self.drop_layer = nn.Dropout(p=p) and self.dropout = nn.Dropout(p) differ only because the authors assigned the layer to different variable names. In PyTorch, a dropout layer is typically defined in the .__init__() method and applied in .forward(), like this:

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.5):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.dropout = nn.Dropout(p)

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = self.dropout(self.fc3(out))  # dropout applied after the last layer here
        return out
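
As a quick usage sketch (the sizes below are made up for illustration): keep in mind that nn.Dropout is only active in training mode, so switch to model.eval() before inference:

# hypothetical sizes, for illustration only
model = NeuralNet(input_size=20, hidden_size=50, num_classes=10, p=0.5)
x = torch.randn(4, 20)   # a batch of 4 samples

model.train()            # dropout active: elements are randomly zeroed
out_train = model(x)

model.eval()             # dropout disabled: behaves as the identity
out_eval = model(x)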

You can test it:

m = nn.Dropout(p=0.5)
input = torch.randn(20, 16)
print(torch.sum(torch.nonzero(input)))
print(torch.sum(torch.nonzero(m(input))))
tensor(5440) # sum of nonzero indices before dropout
tensor(2656) # sum of nonzero indices after dropout (roughly half survive)

Let's visualize it:

import torch
import torch.nn as nn
input = torch.randn(5, 5)
print(input)
tensor([[ 1.1404,  0.2102, -0.1237,  0.4240,  0.0174],
        [-2.0872,  1.2790,  0.7804, -0.0962, -0.9730],
        [ 0.4788, -1.3408,  0.0483,  2.4125, -1.2463],
        [ 1.5761,  0.3592,  0.2302,  1.3980,  0.0154],
        [-0.4308,  0.2484,  0.8584,  0.1689, -1.3607]])

Now, let's apply dropout:

m = nn.Dropout(p=0.5)
output = m(input)
print(output)
tensor([[ 0.0000,  0.0000, -0.0000,  0.8481,  0.0000],
        [-0.0000,  0.0000,  1.5608, -0.0000, -1.9459],
        [ 0.0000, -0.0000,  0.0000,  0.0000, -0.0000],
        [ 0.0000,  0.7184,  0.4604,  2.7959,  0.0308],
        [-0.0000,  0.0000,  0.0000,  0.0000, -0.0000]])

Roughly half of the values have been zeroed out, because each element is set to zero with probability p=0.5. Note also that the surviving values are scaled by 1/(1-p) (doubled here, which is why 0.4240 became 0.8481), so the expected activation stays the same during training.
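
As for the last point in the question: to apply dropout only to the second layer, just move the self.dropout call in forward(). A minimal sketch, keeping everything else unchanged:

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.5):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.dropout = nn.Dropout(p)

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = self.dropout(F.relu(self.fc2(out)))  # dropout only after the second layer
        out = self.fc3(out)                        # all other layers unchanged
        return out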
