00126 Fine-tuning a Pretrained Model on Windows 11
Preface
This article describes how to fine-tune a pretrained model.
Hugging Face GitHub home page: https://github.com/huggingface
When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice:
- Fine-tune a pretrained model with 🤗 Transformers `Trainer`.
- Fine-tune a pretrained model in native PyTorch.
Operating system: Windows 11 Home, Chinese edition
Reference documentation
Prepare a dataset
Begin by loading the Yelp Reviews dataset:
```python
from datasets import load_dataset

# the Yelp Reviews dataset on the Hugging Face Hub
dataset = load_dataset("yelp_review_full")
```
As you now know, you need a tokenizer to process the text and include a padding and truncation strategy to handle any variable sequence lengths. To process your dataset in one step, use the 🤗 Datasets `map` method to apply a preprocessing function over the entire dataset:
```python
from transformers import AutoTokenizer
```
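Building on the import above, a minimal preprocessing sketch; `bert-base-cased` is an assumed checkpoint (any compatible model works), and the padding/truncation arguments implement the strategy described above:

```python
# assumed checkpoint; pick the model you actually plan to fine-tune
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_function(examples):
    # pad and truncate every review so all sequences share the same length
    return tokenizer(examples["text"], padding="max_length", truncation=True)

# apply the preprocessing function to the entire dataset in batches
tokenized_datasets = dataset.map(tokenize_function, batched=True)
```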
If you like, you can create a smaller subset of the full dataset to fine-tune on to reduce the time it takes:
1 | "train"].shuffle(seed=42).select(range(1000)) small_train_dataset = tokenized_datasets[ |
Train with PyTorch Trainer
The `Trainer` API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision.
Start by loading your model and specify the number of expected labels. From the Yelp Review dataset card, you know there are five labels:
```python
from transformers import AutoModelForSequenceClassification
```
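A sketch of the model load, building on the import above; `num_labels=5` comes from the Yelp Review dataset card, while `bert-base-cased` is an assumed checkpoint and should match the tokenizer used earlier:

```python
# assumed checkpoint; num_labels=5 matches the five star ratings in the Yelp Reviews dataset
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```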
You will see a warning about some of the pretrained weights not being used and some weights being randomly initialized. Don’t worry, this is completely normal! The pretrained head of the BERT model is discarded, and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it.
Training hyperparameters
Next, create a `TrainingArguments` class which contains all the hyperparameters you can tune as well as flags for activating different training options. For this tutorial you can start with the default training hyperparameters, but feel free to experiment with these to find your optimal settings.
Specify where to save the checkpoints from your training:
```python
from transformers import TrainingArguments
```
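Continuing from the import above, a minimal sketch; the `test_trainer` directory name is only a placeholder:

```python
# checkpoints will be written under this (placeholder) directory
training_args = TrainingArguments(output_dir="test_trainer")
```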
Evaluate
`Trainer` does not automatically evaluate model performance during training. You'll need to pass `Trainer` a function to compute and report metrics. The 🤗 Evaluate library provides a simple `accuracy` function you can load with the `evaluate.load` function (see this quicktour for more information):
```python
import numpy as np
import evaluate

metric = evaluate.load("accuracy")
```
Call `compute` on `metric` to calculate the accuracy of your predictions. Before passing your predictions to `compute`, you need to convert the logits to predictions (remember all 🤗 Transformers models return logits):
```python
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # convert logits to predicted class ids before computing accuracy
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```
If you'd like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch:
```python
from transformers import TrainingArguments, Trainer
```
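Building on the import above, a sketch of training arguments that report metrics once per epoch (the output directory is again a placeholder):

```python
# evaluation_strategy="epoch" makes Trainer call compute_metrics at the end of every epoch
training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```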
Trainer
Create a `Trainer` object with your model, training arguments, training and test datasets, and evaluation function:
```python
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
```
Then fine-tune your model by calling `train()`:
```python
trainer.train()
```
Train in native PyTorch
At this point, you may need to restart your notebook or execute the following code to free some memory:
```python
# drop the references from the Trainer run above to free memory
del model
del trainer
```
Next, manually postprocess `tokenized_datasets` to prepare it for training.
- Remove the `text` column because the model does not accept raw text as an input:

  ```python
  tokenized_datasets = tokenized_datasets.remove_columns(["text"])
  ```

- Rename the `label` column to `labels` because the model expects the argument to be named `labels`:

  ```python
  tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
  ```

- Set the format of the dataset to return PyTorch tensors instead of lists:

  ```python
  tokenized_datasets.set_format("torch")
  ```
Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning:
1 | "train"].shuffle(seed=42).select(range(1000)) small_train_dataset = tokenized_datasets[ |
DataLoader
Create a `DataLoader` for your training and test datasets so you can iterate over batches of data:
```python
from torch.utils.data import DataLoader
```
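Continuing from the import above, a sketch of the two loaders; `batch_size=8` is an assumed value, adjust it to your GPU memory:

```python
# shuffle only the training data; batch_size=8 is an assumption, tune it to your hardware
train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```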
Load your model with the number of expected labels:
```python
from transformers import AutoModelForSequenceClassification

# assumed checkpoint; use the same one as the tokenizer, with num_labels=5 for the Yelp labels
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```
Optimizer and learning rate scheduler
Create an optimizer and learning rate scheduler to fine-tune the model. Let's use the `AdamW` optimizer from PyTorch:
```python
from torch.optim import AdamW
```
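Building on the import above, a sketch with an assumed learning rate of `5e-5`, a common starting point for BERT-style fine-tuning:

```python
# 5e-5 is an assumed learning rate; adjust as needed
optimizer = AdamW(model.parameters(), lr=5e-5)
```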
Create the default learning rate scheduler from `Trainer`:
```python
from transformers import get_scheduler
```
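Continuing from the import above, a sketch of a linear schedule over all training steps; three epochs is an assumed value:

```python
num_epochs = 3  # assumed number of epochs
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
```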
Lastly, specify `device` to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes.
```python
import torch
```
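A minimal sketch of the device selection, building on the import above:

```python
# use the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
```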
Great, now you are ready to train! 🥳
Training loop
To keep track of your training progress, use the tqdm library to add a progress bar over the number of training steps:
```python
from tqdm.auto import tqdm
```
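Building on the import above, a sketch of the loop, assuming the model, optimizer, scheduler, dataloaders, and `device` defined earlier:

```python
progress_bar = tqdm(range(num_training_steps))

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        # move the batch tensors to the same device as the model
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
```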
Evaluate
Just like how you added an evaluation function to `Trainer`, you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you'll accumulate all the batches with `add_batch` and calculate the metric at the very end.
```python
import evaluate
```
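Building on the import above, a sketch of the evaluation loop: predictions are accumulated batch by batch with `add_batch`, and the accuracy is computed once at the very end:

```python
metric = evaluate.load("accuracy")

model.eval()
for batch in eval_dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        outputs = model(**batch)

    # convert logits to predicted class ids before handing them to the metric
    predictions = torch.argmax(outputs.logits, dim=-1)
    metric.add_batch(predictions=predictions, references=batch["labels"])

metric.compute()
```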
Additional resources
For more fine-tuning examples, refer to:
- 🤗 Transformers Examples includes scripts to train common NLP tasks in PyTorch and TensorFlow.
- 🤗 Transformers Notebooks contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow.
Conclusion
My 126th blog post is finished, so happy!!!!
Today is another day full of hope.