Testing the accuracy of a PyTorch model

I am classifying a series of images with PyTorch. The network is defined as follows:

model = models.vgg16(pretrained=True)
model.cuda()
for param in model.parameters(): param.requires_grad = False

classifier = nn.Sequential(OrderedDict([
                           ('fc1', nn.Linear(25088, 4096)),
                           ('relu', nn.ReLU()),
                           ('fc2', nn.Linear(4096, 102)),
                           ('output', nn.LogSoftmax(dim=1))
                           ]))

model.classifier = classifier

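A note on the 25088: it is the size of VGG16's flattened convolutional output (512 × 7 × 7 for a 224 × 224 input), which is why fc1 takes that many input features. A quick way to check this (an aside, not part of the original question):

import torch
from torchvision import models

features = models.vgg16().features        # the convolutional part of VGG16
x = torch.randn(1, 3, 224, 224)           # one dummy 224x224 RGB image
print(features(x).shape)                  # torch.Size([1, 512, 7, 7]); 512 * 7 * 7 = 25088
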
The criterion and optimizer are as follows:

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

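For reference, the snippets in this post assume imports along these lines (they are not shown in the original, so treat this as a sketch):

from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import models
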
My validation function is as follows:

def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for images, labels in testloader:

        images.resize_(images.shape[0], 784)

        output = model.forward(images)
        test_loss += criterion(output, labels).item()

        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()

    return test_loss, accuracy

This is the code that raises the following error:

RuntimeError: input has less dimensions than expected

epochs = 3
print_every = 40
steps = 0
running_loss = 0
testloader = dataloaders['test']

# change to cuda
model.to('cuda')

for e in range(epochs):
    running_loss = 0
    for ii, (inputs, labels) in enumerate(dataloaders['train']):
        steps += 1

        inputs, labels = inputs.to('cuda'), labels.to('cuda')

        optimizer.zero_grad()

        # Forward and backward passes
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if steps % print_every == 0:
            model.eval()
            with torch.no_grad():
                test_loss, accuracy = validation(model, testloader, criterion)

            print("Epoch: {}/{}.. ".format(e+1, epochs),
                  "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                  "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))

            running_loss = 0

Any help?


2 Answers

Just in case.

If you don't have a GPU system (say you are developing on a laptop and will eventually test on a server with a GPU), you can do the same thing with:

if torch.cuda.is_available():
    inputs = inputs.to('cuda')
else:
    inputs = inputs.to('cpu')
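
A common equivalent idiom is to resolve the device once and reuse it everywhere; the stand-in model and batch below are only there to make the sketch self-contained:

import torch
from torch import nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)       # stand-in for the VGG16 model above
inputs = torch.randn(8, 4).to(device)    # stand-in for a batch from the dataloader
outputs = model(inputs)                  # runs on the GPU if available, otherwise on the CPU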

Also, if you are wondering why there is a LogSoftmax rather than a Softmax, that is because he is using NLLLoss as his loss function. You can read more about softmax here.
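
The connection is that CrossEntropyLoss is just LogSoftmax followed by NLLLoss, which you can verify with a small sketch (my illustration, not from the original answer):

import torch
from torch import nn

logits = torch.randn(4, 102)                     # raw scores for 4 samples, 102 classes
targets = torch.randint(0, 102, (4,))

log_probs = nn.LogSoftmax(dim=1)(logits)         # what the model's 'output' layer produces
loss_a = nn.NLLLoss()(log_probs, targets)

loss_b = nn.CrossEntropyLoss()(logits, targets)  # the same computation on the raw logits

print(torch.allclose(loss_a, loss_b))            # True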

I needed to change my validation function as follows:

def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0

    for inputs, labels in testloader:
        # move the batch to the GPU, where the model lives
        inputs, labels = inputs.to('cuda'), labels.to('cuda')

        output = model.forward(inputs)
        test_loss += criterion(output, labels).item()

        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()

    return test_loss, accuracy

The inputs need to be moved to 'cuda' with inputs.to('cuda') (and the labels with them, so that the loss is computed on the same device).
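
The resize_ call from the original validation function also has to go: VGG16's convolutional layers expect a 4-D batch of images, not a 2-D tensor of 784 features, which is what the "input has less dimensions than expected" error is complaining about. A minimal shape check (my own sketch; the weights do not matter here):

import torch
from torchvision import models

model = models.vgg16().eval()             # an untrained VGG16 is enough for a shape check

batch = torch.randn(2, 3, 224, 224)       # (N, C, H, W), as produced by the usual transforms
with torch.no_grad():
    out = model(batch)                    # works: out.shape is (2, 1000) for the stock classifier

flat = batch.reshape(2, -1)[:, :784]      # a 2-D tensor like the one resize_ created
# model(flat) fails, because the convolutional layers need a 4-D input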
