
Implementing the BertAdam() Algorithm in Python for Chinese Text Classification

Published: 2023-12-11 14:23:55

BertAdam is a variant of the Adam optimizer designed for training BERT models: it combines Adam with learning-rate warmup and a weight-decay fix, and omits Adam's bias correction. It originally shipped with the pytorch-pretrained-bert library; in its successor, the transformers library, it has been replaced by AdamW combined with a warmup learning-rate schedule. For a Chinese text classification task, we can use this AdamW-plus-warmup combination to get the same training behavior. Below is a Python example.
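For reference, the original BertAdam from the legacy pytorch-pretrained-bert library was used roughly as follows. This is only a sketch under the assumption that the old library is installed; num_train_steps is a placeholder you would compute from your own data. The rest of this article uses the modern AdamW equivalent instead.

# Historical BertAdam usage from pytorch-pretrained-bert (predecessor of transformers).
from pytorch_pretrained_bert.optimization import BertAdam

num_train_steps = 1000  # placeholder: len(train_dataloader) * num_epochs
optimizer = BertAdam(model.parameters(),
                     lr=2e-5,             # peak learning rate
                     warmup=0.1,          # warm up over the first 10% of steps
                     t_total=num_train_steps)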

First, install the required libraries: PyTorch and transformers.

pip install torch
pip install transformers

Next, load the tokenizer for the pretrained model; here we use bert-base-chinese as an example. (The classification model itself is built later from the same checkpoint, so only the tokenizer is needed at this point.)

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
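As a quick, purely illustrative sanity check, you can inspect how the tokenizer splits a Chinese sentence; bert-base-chinese operates essentially character by character on Chinese text.

# Illustrative only: bert-base-chinese tokenizes Chinese roughly one character per token
print(tokenizer.tokenize('中文文本分类'))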

Next, load the training and test data. As a simple text classification example, assume the datasets are stored in train.txt and test.txt, one sample per line, with the text and its label separated by a tab.

def load_dataset(file_path):
    """Read one tab-separated (text, label) pair per line."""
    texts = []
    labels = []
    with open(file_path, 'r', encoding='utf-8') as f:
        for line in f:
            text, label = line.strip().split('\t')
            texts.append(text)
            labels.append(int(label))
    return texts, labels

train_texts, train_labels = load_dataset('train.txt')
test_texts, test_labels = load_dataset('test.txt')
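For reference, each line in train.txt and test.txt is expected to look like the following (the sentences and labels here are made up purely for illustration):

这部电影的剧情非常精彩	1
服务态度太差，不会再来了	0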

Then we tokenize and encode the texts; each text is padded or truncated to a fixed length of 128 tokens.

import torch

def encode_texts(texts, tokenizer):
    input_ids = []
    attention_masks = []
    for text in texts:
        encoded = tokenizer.encode_plus(
            text,
            add_special_tokens=True,
            max_length=128,
            padding='max_length',   # pad every sequence to max_length
            truncation=True,        # truncate anything longer than max_length
            return_attention_mask=True,
            return_tensors='pt'
        )
        input_ids.append(encoded['input_ids'])
        attention_masks.append(encoded['attention_mask'])
    input_ids = torch.cat(input_ids, dim=0)
    attention_masks = torch.cat(attention_masks, dim=0)
    return input_ids, attention_masks

train_input_ids, train_attention_masks = encode_texts(train_texts, tokenizer)
test_input_ids, test_attention_masks = encode_texts(test_texts, tokenizer)

Next, define a Dataset class that serves the encoded training and test data.

from torch.utils.data import Dataset

class TextDataset(Dataset):
    def __init__(self, input_ids, attention_masks, labels):
        self.input_ids = input_ids
        self.attention_masks = attention_masks
        self.labels = torch.tensor(labels)  # list of ints -> LongTensor for batching

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.attention_masks[idx], self.labels[idx]

train_dataset = TextDataset(train_input_ids, train_attention_masks, train_labels)
test_dataset = TextDataset(test_input_ids, test_attention_masks, test_labels)

Then we can define a BERT classification model and train it. Since transformers no longer ships BertAdam, we use its documented replacement, AdamW. The model's forward method returns (loss, logits) when labels are passed in, which is the interface the training and evaluation functions below rely on.

import torch
import torch.nn as nn
from torch.optim import AdamW
from transformers import BertPreTrainedModel, BertModel

class BertClassifier(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, labels=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        pooled_output = self.dropout(outputs.pooler_output)  # pooled [CLS] representation
        logits = self.classifier(pooled_output)
        if labels is not None:
            # Return (loss, logits) for training/evaluation, matching the functions below
            loss = nn.CrossEntropyLoss()(logits, labels)
            return loss, logits
        return logits

# num_labels must match the number of classes in your dataset (2 is just an example)
model = BertClassifier.from_pretrained('bert-base-chinese', num_labels=2)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

optimizer = AdamW(model.parameters(), lr=2e-5)
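BertAdam's defining feature was its built-in learning-rate warmup. To reproduce it with AdamW, you can attach transformers' linear warmup schedule. The following is a minimal sketch under the assumptions used later in this article (batch size 32, 5 epochs, 10% warmup); create the scheduler after the train_dataloader below is defined, and call scheduler.step() right after optimizer.step() in the training loop.

from transformers import get_linear_schedule_with_warmup

# Assumes train_dataloader and num_epochs as defined later in this article.
num_training_steps = len(train_dataloader) * num_epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # 10% warmup, mirroring BertAdam's warmup argument
    num_training_steps=num_training_steps
)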

Next, we define the training and evaluation functions.

def train(model, dataloader, optimizer, device):
    model.train()
    total_loss = 0
    for input_ids, attention_masks, labels in dataloader:
        input_ids = input_ids.to(device)
        attention_masks = attention_masks.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        loss, _ = model(input_ids=input_ids, attention_mask=attention_masks, labels=labels)
        loss.backward()
        # BertAdam clipped gradients internally (max_grad_norm=1.0); do the same here
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(dataloader)

def evaluate(model, dataloader, device):
    model.eval()
    total_loss = 0
    correct = 0
    with torch.no_grad():
        for input_ids, attention_masks, labels in dataloader:
            input_ids = input_ids.to(device)
            attention_masks = attention_masks.to(device)
            labels = labels.to(device)
            loss, logits = model(input_ids=input_ids, attention_mask=attention_masks, labels=labels)
            total_loss += loss.item()
            _, predicted = torch.max(logits, 1)
            correct += (predicted == labels).sum().item()
    return total_loss / len(dataloader), correct / len(dataloader.dataset)

Finally, we train and evaluate the model.

from torch.utils.data import DataLoader

train_dataloader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=32, shuffle=False)

num_epochs = 5
for epoch in range(num_epochs):
    train_loss = train(model, train_dataloader, optimizer, device)
    test_loss, test_accuracy = evaluate(model, test_dataloader, device)
    print('Epoch: {}, Train Loss: {:.4f}, Test Loss: {:.4f}, Test Accuracy: {:.4f}'.format(epoch+1, train_loss, test_loss, test_accuracy))

That completes the example of implementing BertAdam-style training in Python for Chinese text classification. You can adapt it to your own dataset and model; note that the example is for reference only and should be tailored to your specific setup.
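As a closing usage example (a sketch; the input sentence is made up for illustration), here is how the trained model can classify a new piece of text:

# Illustrative single-sentence inference with the trained classifier.
model.eval()
encoded = tokenizer.encode_plus('这部电影的剧情非常精彩',
                                add_special_tokens=True,
                                max_length=128,
                                padding='max_length',
                                truncation=True,
                                return_attention_mask=True,
                                return_tensors='pt')
with torch.no_grad():
    logits = model(input_ids=encoded['input_ids'].to(device),
                   attention_mask=encoded['attention_mask'].to(device))
predicted_label = torch.argmax(logits, dim=1).item()
print(predicted_label)  # index of the predicted class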