How to Use SummaryWriter() in Python to Generate Experiment Result Summaries
Using SummaryWriter() in Python lets you record experiment results and visualize them in TensorBoard, which makes the training process and its results easier to inspect and to compare across experiments. The following steps show how to use SummaryWriter() to generate an experiment result summary, followed by a complete example.
Step 1: Import the required libraries and modules
First, import the required libraries and modules: torch, torchvision, and torch.utils.tensorboard. torchvision loads the dataset, torch builds the model and computes the loss, and torch.utils.tensorboard provides the SummaryWriter class that writes the results to event files.
```python
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
```
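Note: torch.utils.tensorboard depends on the standalone TensorBoard package being installed; if the import fails, installing it (typically via `pip install tensorboard`) usually resolves the problem.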
Step 2: Prepare the dataset
Next, prepare the dataset. FashionMNIST is used as the example here. As shown below, the training and test sets are loaded with torchvision's datasets module, and the corresponding data loaders are created.
```python
# Load the training and test sets
train_dataset = torchvision.datasets.FashionMNIST(root='./data', train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.FashionMNIST(root='./data', train=False, transform=torchvision.transforms.ToTensor(), download=True)

# Create the data loaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)
```
Step 3: Build the model
Then, build the model. A simple convolutional neural network is used here. As shown below, torch.nn is used to define a model with two convolutional layers and two fully connected layers.
```python
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = torch.nn.Linear(320, 50)
        self.fc2 = torch.nn.Linear(50, 10)

    def forward(self, x):
        x = torch.nn.functional.relu(self.conv1(x))
        x = torch.nn.functional.max_pool2d(x, 2)
        x = torch.nn.functional.relu(self.conv2(x))
        x = torch.nn.functional.max_pool2d(x, 2)
        x = x.view(-1, 320)
        x = torch.nn.functional.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Create a model instance
model = Net()
```
Step 4: Specify the loss function and optimizer
Next, specify the loss function and the optimizer. Cross-entropy loss and the SGD optimizer are used here.
```python
# Specify the loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
Step 5: Create the SummaryWriter() object
Next, create a SummaryWriter() object. It writes the experiment results to event files on disk, which TensorBoard then reads for visualization.
```python
# Create the SummaryWriter() object
writer = SummaryWriter()
```
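By default, SummaryWriter() writes to a timestamped subdirectory under ./runs. As an optional, hedged sketch (not required by the steps above): passing a log_dir gives runs recognizable names in TensorBoard, and add_graph can record the model architecture. The directory name used below is just an illustrative choice.
```python
# Optional: use a named log directory so different runs are easy to tell apart in TensorBoard
writer = SummaryWriter(log_dir='runs/fashion_mnist_experiment')  # the directory name is an arbitrary example

# Optional: log the model graph using a dummy input with the FashionMNIST shape (1 x 28 x 28)
dummy_input = torch.randn(1, 1, 28, 28)
writer.add_graph(model, dummy_input)
```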
Step 6: Train the model and log the results
Then, train the model and record the results as training progresses. As shown below, at the end of each epoch writer.add_scalar() records the training loss and accuracy, and writer.add_histogram() records the distribution of each model parameter.
```python
# Train the model and log the results
for epoch in range(10):
    running_loss = 0.0
    running_corrects = 0
    total = 0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)
        total += inputs.size(0)
    # Log per-epoch metrics and parameter histograms
    epoch_loss = running_loss / total
    epoch_acc = running_corrects.double() / total
    writer.add_scalar('Loss/train', epoch_loss, epoch)
    writer.add_scalar('Accuracy/train', epoch_acc.item(), epoch)
    for name, param in model.named_parameters():
        writer.add_histogram(name, param.clone().cpu().data.numpy(), epoch)
```
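SummaryWriter is not limited to scalars and histograms. As an optional aside (a minimal sketch, assuming train_loader is defined as above), a grid of sample input images can be logged with add_image:
```python
# Optional: log one batch of training images as an image grid
images, _ = next(iter(train_loader))
img_grid = torchvision.utils.make_grid(images)
writer.add_image('FashionMNIST/sample_batch', img_grid, 0)  # the tag name is an arbitrary example
```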
Step 7: Evaluate the model and log the results
After training completes, evaluate the model on the test set and record the result. As shown below, the accuracy is computed over the test set and logged with writer.add_scalar().
```python
# Evaluate on the test set and log the result
model.eval()
running_corrects = 0
total = 0
with torch.no_grad():  # gradients are not needed during evaluation
    for inputs, labels in test_loader:
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        running_corrects += torch.sum(preds == labels.data)
        total += inputs.size(0)
test_acc = running_corrects.double() / total
# 'epoch' still holds the index of the last training epoch, so the test accuracy is logged at that step
writer.add_scalar('Accuracy/test', test_acc.item(), epoch)
```
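To relate a run's hyperparameters to its final metrics in TensorBoard's HPARAMS tab, add_hparams can also be used. This is an optional sketch, assuming the learning rate and batch size from the steps above (0.01 and 64) are the values worth recording:
```python
# Optional: associate the run's hyperparameters with its final test accuracy
writer.add_hparams(
    {'lr': 0.01, 'batch_size': 64},
    {'hparam/test_accuracy': test_acc.item()},
)
```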
Step 8: Close the SummaryWriter() object
Finally, close the SummaryWriter() object to flush any pending events to disk and release its resources.
```python
# Close the SummaryWriter() object
writer.close()
```
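Once the event files have been written, the results can be viewed by starting TensorBoard from a terminal, typically with `tensorboard --logdir=runs` (adjust the path if a custom log_dir was used), and opening the URL it prints in a browser.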
Complete example:
The example below puts all of the steps together and shows how to use SummaryWriter() in Python to generate an experiment result summary and visualize it with TensorBoard.
```python
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter

# Load the training and test sets
train_dataset = torchvision.datasets.FashionMNIST(root='./data', train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.FashionMNIST(root='./data', train=False, transform=torchvision.transforms.ToTensor(), download=True)

# Create the data loaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)

# Build the model
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = torch.nn.Linear(320, 50)
        self.fc2 = torch.nn.Linear(50, 10)

    def forward(self, x):
        x = torch.nn.functional.relu(self.conv1(x))
        x = torch.nn.functional.max_pool2d(x, 2)
        x = torch.nn.functional.relu(self.conv2(x))
        x = torch.nn.functional.max_pool2d(x, 2)
        x = x.view(-1, 320)
        x = torch.nn.functional.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Create a model instance
model = Net()

# Specify the loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Create the SummaryWriter() object
writer = SummaryWriter()

# Train the model and log the results
for epoch in range(10):
    running_loss = 0.0
    running_corrects = 0
    total = 0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)
        total += inputs.size(0)
    epoch_loss = running_loss / total
    epoch_acc = running_corrects.double() / total
    writer.add_scalar('Loss/train', epoch_loss, epoch)
    writer.add_scalar('Accuracy/train', epoch_acc.item(), epoch)
    for name, param in model.named_parameters():
        writer.add_histogram(name, param.clone().cpu().data.numpy(), epoch)

# Evaluate on the test set and log the result
model.eval()
running_corrects = 0
total = 0
with torch.no_grad():
    for inputs, labels in test_loader:
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        running_corrects += torch.sum(preds == labels.data)
        total += inputs.size(0)
test_acc = running_corrects.double() / total
writer.add_scalar('Accuracy/test', test_acc.item(), epoch)

# Close the SummaryWriter() object
writer.close()
```