Training and Testing a LeNet Network in Python
Published: 2023-12-17 08:10:47
LeNet is a classic convolutional neural network architecture proposed by Professor Yann LeCun, and it has had a lasting influence on the field. In 1998 it was applied to handwritten digit recognition and achieved excellent classification accuracy. The following is an example of training and testing a LeNet network implemented in Python.
First, we import the necessary libraries and modules:
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
Next, we define the LeNet architecture. LeNet is composed of convolutional layers, pooling layers, and fully connected layers. In this example we use the classic LeNet-5 architecture and add Dropout layers to reduce overfitting:
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        # two convolutional layers: 1 input channel (grayscale), 5x5 kernels
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        # classifier: 16 feature maps of size 4x4 remain after the second pooling layer
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        x = nn.functional.relu(self.conv1(x))
        x = nn.functional.max_pool2d(x, 2)
        x = nn.functional.relu(self.conv2(x))
        x = nn.functional.max_pool2d(x, 2)
        x = x.view(-1, 16 * 4 * 4)  # flatten for the fully connected layers
        x = nn.functional.relu(self.fc1(x))
        x = self.dropout(x)
        x = nn.functional.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x
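As a quick sanity check of the flattened feature size, you can trace the spatial dimensions by hand or run a dummy forward pass. This snippet is an optional addition (not part of the original script), assuming it is run after the class definition above:

# Dimension trace: 28x28 -> conv1 (5x5) -> 24x24 -> pool -> 12x12
# -> conv2 (5x5) -> 8x8 -> pool -> 4x4, which is why fc1 expects 16 * 4 * 4 features.
dummy = torch.randn(1, 1, 28, 28)  # a batch with one grayscale 28x28 image
print(LeNet()(dummy).shape)        # expected: torch.Size([1, 10])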
Next, we define the data preprocessing pipeline: each image is converted to a tensor and normalized with the MNIST mean and standard deviation. Data augmentation can optionally be added here to improve the network's generalization (a sketch follows the transform below):
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
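The transform above only converts images to tensors and normalizes them. If you do want training-time augmentation, the variant below is a rough sketch; RandomRotation and RandomCrop are standard torchvision transforms, but the specific rotation and padding values are illustrative assumptions rather than tuned settings:

# Illustrative augmented transform for the training set (values are assumptions):
train_transform = transforms.Compose([
    transforms.RandomRotation(10),         # rotate by up to +/-10 degrees
    transforms.RandomCrop(28, padding=2),  # small random shifts via a padded crop
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])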
Then we load the MNIST dataset as the training and test sets:
train_dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transform)
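To confirm that each sample has the 1x28x28 shape the network expects, with an integer label from 0 to 9, you can inspect a single element of the training set (an optional check, not part of the original script):

image, label = train_dataset[0]
print(image.shape, label)  # expected: torch.Size([1, 28, 28]) and a digit label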
Next, we write the training and testing code. First, the training function, which runs the forward pass, computes the loss, backpropagates, and updates the parameters:
def train(model, device, train_loader, optimizer, criterion, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
Next, we define the test function, which runs the forward pass on the test set and computes the average loss and accuracy:
def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # sum the per-sample losses so that dividing by the dataset size below
            # gives the average loss per sample
            test_loss += nn.functional.cross_entropy(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
Finally, we define the main function, set the hyperparameters, and start the training and testing loop:
def main():
    batch_size = 64
    epochs = 10
    lr = 0.01
    momentum = 0.5
    seed = 1

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    torch.manual_seed(seed)

    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=1,
        pin_memory=True
    )
    test_loader = torch.utils.data.DataLoader(
        test_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=1,
        pin_memory=True
    )

    model = LeNet().to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)

    for epoch in range(1, epochs + 1):
        train(model, device, train_loader, optimizer, criterion, epoch)
        test(model, device, test_loader)


if __name__ == '__main__':
    main()
This completes the example of training and testing a LeNet network in Python. Running it trains a classifier on the MNIST handwritten digit dataset and prints the test accuracy after every epoch. The example should give readers a clearer picture of the LeNet architecture and the associated training procedure, and it can be adapted to other classification tasks by adjusting the hyperparameters and the network structure, as sketched below.
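As one illustration of such a modification, the class below is a hypothetical sketch (not part of the original example) that parameterizes the input channels and number of classes so the same architecture could be reused, for instance, on 3-channel 32x32 images such as CIFAR-10, where the flattened feature size becomes 16 * 5 * 5:

# Hypothetical variant: parameterize input channels, class count, and feature size.
class LeNetGeneric(nn.Module):
    def __init__(self, in_channels=1, num_classes=10, feat_size=4):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.feat_dim = 16 * feat_size * feat_size  # 4 for 28x28 input, 5 for 32x32
        self.fc1 = nn.Linear(self.feat_dim, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        x = nn.functional.max_pool2d(nn.functional.relu(self.conv1(x)), 2)
        x = nn.functional.max_pool2d(nn.functional.relu(self.conv2(x)), 2)
        x = x.view(-1, self.feat_dim)
        x = self.dropout(nn.functional.relu(self.fc1(x)))
        x = self.dropout(nn.functional.relu(self.fc2(x)))
        return self.fc3(x)

# For example, for CIFAR-10: model = LeNetGeneric(in_channels=3, feat_size=5)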
