
Building a Chinese Text Summarization Model with torchtext

Published: 2023-12-25 05:35:08

Before building a Chinese text summarization model with torchtext, we need to preprocess and transform the data. First, we segment the Chinese text into words and convert the words to integer indices. Then we use torchtext to create dataset objects and split the data into training, validation, and test sets. Finally, we use these datasets to build the model.

The steps for building a Chinese text summarization model with torchtext are as follows:

1. Install the torchtext library. Note that the torchtext.legacy API used below was removed in torchtext 0.12, so pin an earlier release:

pip install "torchtext<0.12"

2. Preprocess the data:

import jieba

def tokenize(text):
    # Segment Chinese text into a list of words with jieba.
    return list(jieba.cut(text))

def preprocess(data_file, source_file, target_file):
    # data_file holds one example per line in the form "text<TAB>summary".
    with open(data_file, 'r', encoding='utf-8') as f:
        lines = f.readlines()

    with open(source_file, 'w', encoding='utf-8') as f_source, open(target_file, 'w', encoding='utf-8') as f_target:
        for line in lines:
            line = line.strip()
            if len(line) > 0:
                text, summary = line.split('\t')
                source_tokens = tokenize(text)
                target_tokens = tokenize(summary)
                f_source.write(' '.join(source_tokens) + '\n')
                f_target.write(' '.join(target_tokens) + '\n')

In the code above, we first define a tokenize function that segments text into words with jieba. We then define a preprocess function that takes the path of the raw data file (one "text<TAB>summary" pair per line) plus output paths for the preprocessed source and target texts. The function reads the raw file, segments the source text and the summary of each line, and writes the space-joined tokens to the corresponding output files.
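As a quick sketch, assuming a raw file named raw_train.txt in the "text<TAB>summary" format (all file names here are hypothetical), the call below produces the two tokenized files:

# Hypothetical file names; adjust to your own data layout.
preprocess('raw_train.txt', 'train.source', 'train.target')

# tokenize() can also be tried directly; jieba usually segments
# '今天天气很好' into ['今天', '天气', '很', '好'].
print(tokenize('今天天气很好'))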

3. Create dataset objects:

from torchtext.legacy.data import Field, TabularDataset

# batch_first=True matches the LSTM layout used later;
# include_lengths=True makes batch.source a (data, lengths) pair,
# which pack_padded_sequence needs.
source_field = Field(tokenize=tokenize, init_token='<start>', eos_token='<end>',
                     lower=True, batch_first=True, include_lengths=True)
target_field = Field(tokenize=tokenize, init_token='<start>', eos_token='<end>',
                     lower=True, batch_first=True)

train_data = TabularDataset(path='train.txt', format='tsv', fields=[('source', source_field), ('target', target_field)])
valid_data = TabularDataset(path='valid.txt', format='tsv', fields=[('source', source_field), ('target', target_field)])
test_data = TabularDataset(path='test.txt', format='tsv', fields=[('source', source_field), ('target', target_field)])

In the code above, we first import the Field and TabularDataset classes. We then define two Field objects, one for the source text and one for the target text, specifying the tokenizer, the start and end tokens, lowercasing, and a batch-first tensor layout (with include_lengths=True on the source so that each batch also carries the true sequence lengths). Because each Field applies jieba itself, train.txt here is the raw tab-separated file of text and summary; the separate preprocessing step above is mainly useful for inspecting or caching the tokenized output. Finally, we use the TabularDataset class to create dataset objects for the training, validation, and test sets, specifying the data file paths and the field names.
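To confirm the fields were parsed as expected, one can inspect the first training example (a sketch; the output depends on your data):

# Each Example stores the tokenized source and target as token lists,
# e.g. {'source': ['今天', '天气', ...], 'target': ['天气', ...]}.
print(vars(train_data.examples[0]))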

4. Build the vocabulary:

source_field.build_vocab(train_data)
target_field.build_vocab(train_data)

In the code above, we call the build_vocab method to create vocabularies for the source and target texts. The method builds each vocabulary from the words that occur in the training set. The min_freq parameter can be set to require a minimum number of occurrences in the training set, as sketched below.
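A minimal sketch with a frequency cutoff and a size cap (both values are illustrative, not tuned for any particular corpus):

# Keep only words seen at least twice, cap the vocabulary at 50,000 entries;
# anything rarer falls back to the default '<unk>' token.
source_field.build_vocab(train_data, min_freq=2, max_size=50000)
target_field.build_vocab(train_data, min_freq=2, max_size=50000)
print(len(source_field.vocab), len(target_field.vocab))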

5. Build the iterators:

import torch
from torchtext.legacy.data import BucketIterator

batch_size = 32
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=batch_size,
    sort_key=lambda x: len(x.source),
    sort_within_batch=True,
    device=device)

In the code above, we import the BucketIterator class and call BucketIterator.splits to create iterator objects for the training, validation, and test sets. The iterator groups examples of similar source length into batches and sorts within each batch, which minimizes padding; sorting each batch by descending length is also exactly what pack_padded_sequence expects later. Passing device places the batch tensors on the GPU when one is available.
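A quick way to check what a batch looks like (a sketch; the exact shapes depend on batch_size and on your data):

batch = next(iter(train_iterator))
src, src_lengths = batch.source    # include_lengths=True yields a (data, lengths) pair
print(src.shape)                   # torch.Size([batch_size, max_source_len])
print(batch.target.shape)          # torch.Size([batch_size, max_target_len])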

With these steps complete, the data preparation for a torchtext-based Chinese text summarization model is done. Next, we can use the datasets and iterators to build, train, and evaluate a model.

Usage example:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn.utils.rnn import pack_padded_sequence

class Seq2Seq(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim, num_layers, dropout):
        super(Seq2Seq, self).__init__()
        # Separate embeddings: source and target use different vocabularies.
        self.source_embedding = nn.Embedding(input_dim, hidden_dim)
        self.target_embedding = nn.Embedding(output_dim, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim, num_layers, dropout=dropout, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, num_layers, dropout=dropout, batch_first=True)
        self.linear = nn.Linear(hidden_dim, output_dim)

    def forward(self, input_seqs, input_lengths, target_seqs):
        # Encode the source; packing makes the LSTM skip padded positions.
        embedded = self.source_embedding(input_seqs)
        packed = pack_padded_sequence(embedded, input_lengths.cpu(), batch_first=True)
        _, (hidden, cell) = self.encoder(packed)
        # Teacher forcing: the decoder reads the gold target tokens,
        # starting from the encoder's final state.
        target_embedded = self.target_embedding(target_seqs)
        decoder_outputs, _ = self.decoder(target_embedded, (hidden, cell))
        return self.linear(decoder_outputs)

input_dim = len(source_field.vocab)
output_dim = len(target_field.vocab)
hidden_dim = 256
num_layers = 2
dropout = 0.5

model = Seq2Seq(input_dim, output_dim, hidden_dim, num_layers, dropout)
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss(ignore_index=target_field.vocab.stoi['<pad>'])

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
criterion = criterion.to(device)
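
Before training, a dummy forward pass is a cheap way to verify the tensor plumbing. The shapes below are arbitrary; the lengths are in descending order because pack_padded_sequence is called with its default enforce_sorted=True:

dummy_src = torch.randint(0, input_dim, (4, 10), device=device)
dummy_lengths = torch.tensor([10, 9, 7, 5])
dummy_tgt = torch.randint(0, output_dim, (4, 8), device=device)
with torch.no_grad():
    out = model(dummy_src, dummy_lengths, dummy_tgt)
print(out.shape)  # expected: torch.Size([4, 8, output_dim])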

def train(model, iterator, optimizer, criterion, clip):
    model.train()
    epoch_loss = 0
    for batch in iterator:
        # include_lengths=True makes batch.source a (data, lengths) pair.
        input_seqs, input_lengths = batch.source
        target_seqs = batch.target

        optimizer.zero_grad()
        # Teacher forcing: the decoder sees target[:, :-1] and is trained
        # to predict target[:, 1:], i.e. the next token at each step.
        output_seqs = model(input_seqs, input_lengths, target_seqs[:, :-1])
        output_dim = output_seqs.shape[-1]
        loss = criterion(output_seqs.reshape(-1, output_dim), target_seqs[:, 1:].reshape(-1))
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()
        epoch_loss += loss.item()

    return epoch_loss / len(iterator)

def evaluate(model, iterator, criterion):
    model.eval()
    epoch_loss = 0
    with torch.no_grad():
        for batch in iterator:
            input_seqs, input_lengths = batch.source
            target_seqs = batch.target

            output_seqs = model(input_seqs, input_lengths, target_seqs[:, :-1])
            output_dim = output_seqs.shape[-1]
            loss = criterion(output_seqs.reshape(-1, output_dim), target_seqs[:, 1:].reshape(-1))
            epoch_loss += loss.item()

    return epoch_loss / len(iterator)
    
num_epochs = 10
clip = 1.0

for epoch in range(num_epochs):
    train_loss = train(model, train_iterator, optimizer, criterion, clip)
    valid_loss = evaluate(model, valid_iterator, criterion)
    print(f'Epoch: {epoch+1:02}, Train Loss: {train_loss:.4f}, Valid Loss: {valid_loss:.4f}')

In the example above, we first define a Seq2Seq model for sequence-to-sequence transformation, consisting of source and target embedding layers, an encoder LSTM, and a decoder LSTM that is trained with teacher forcing. We then define the optimizer and the loss function (ignoring padding positions) and move the model and loss to the GPU if one is available. Next, we define train and evaluate functions for training and evaluating the model, and finally run the training loop for num_epochs epochs, printing the training and validation loss after each epoch.
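The tutorial stops at training, so as an illustrative addition (not part of the original text), here is a minimal greedy decoder written against the Seq2Seq class above. It feeds each predicted token back in one step at a time, which is the simplest way to generate a summary; beam search would usually produce better ones:

def greedy_summarize(model, text, max_len=50):
    model.eval()
    with torch.no_grad():
        # Numericalize the input text with the source vocabulary;
        # stoi maps unknown words to '<unk>' by default.
        tokens = ['<start>'] + tokenize(text) + ['<end>']
        ids = [source_field.vocab.stoi[t] for t in tokens]
        src = torch.tensor([ids], device=device)
        lengths = torch.tensor([len(ids)])

        # Encode once, then decode token by token.
        embedded = model.source_embedding(src)
        packed = pack_padded_sequence(embedded, lengths, batch_first=True)
        _, (hidden, cell) = model.encoder(packed)

        result = []
        token = torch.tensor([[target_field.vocab.stoi['<start>']]], device=device)
        for _ in range(max_len):
            emb = model.target_embedding(token)
            out, (hidden, cell) = model.decoder(emb, (hidden, cell))
            token = model.linear(out).argmax(-1)  # pick the most likely next word
            word = target_field.vocab.itos[token.item()]
            if word == '<end>':
                break
            result.append(word)
    return ''.join(result)

# Illustrative input text, not from the original article.
print(greedy_summarize(model, '这里是一段需要摘要的中文文本'))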