00132 Causal Language Modeling (Windows 11)
Preface
This post describes how to do causal language modeling.
Hugging Face GitHub homepage: https://github.com/huggingface
There are two types of language modeling, causal and masked. This guide illustrates causal language modeling.
Causal language models are frequently used for text generation.
Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on
the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.
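To make "can only attend to tokens on the left" concrete, here is a minimal sketch (PyTorch assumed; not part of the original guide) of the causal mask that enforces this:

```python
import torch

# Causal mask for a 5-token sequence: position i may attend only to
# positions <= i (the lower triangle), so future tokens stay hidden.
seq_len = 5
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask)
```

Each row is a query position; the False entries above the diagonal are the future tokens the model is never allowed to see.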
This guide will show you how to:
- Finetune DistilGPT2 on the ELI5-Category dataset.
- Use your finetuned model for inference.
You can finetune other architectures for causal language modeling following the same steps in this guide.
Choose one of the following architectures:
BART, BERT, Bert Generation, BigBird, BigBird-Pegasus, BioGpt, Blenderbot, BlenderbotSmall, BLOOM, CamemBERT, CodeLlama, CodeGen, Cohere, CPM-Ant, CTRL, Data2VecText, ELECTRA, ERNIE, Falcon, Fuyu, Gemma, GIT, GPT-Sw3, OpenAI GPT-2, GPTBigCode, GPT Neo, GPT NeoX, GPT NeoX Japanese, GPT-J, LLaMA, Mamba, Marian, mBART, MEGA, Megatron-BERT, Mistral, Mixtral, MPT, MusicGen, MusicGen Melody, MVP, OpenLlama, OpenAI GPT, OPT, Pegasus, Persimmon, Phi, PLBart, ProphetNet, QDQBert, Qwen2, Reformer, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, RWKV, Speech2Text2, StableLm, Starcoder2, Transformer-XL, TrOCR, Whisper, XGLM, XLM, XLM-ProphetNet, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```python
from huggingface_hub import notebook_login

notebook_login()
```
Operating system: Windows 11 Home, Chinese edition
Reference documentation: https://huggingface.co/docs/transformers/tasks/language_modeling
Load ELI5 dataset
Start by loading the first 5000 examples from the ELI5-Category dataset with the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```python
from datasets import load_dataset

eli5 = load_dataset("eli5_category", split="train[:5000]")
```
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
```python
eli5 = eli5.train_test_split(test_size=0.2)
```
Then take a look at an example:
1 | "train"][0] eli5[ |
While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word is the label.
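Concretely, the (input, label) pairs fall out of the text itself; a small illustration (not from the original guide):

```python
tokens = ["I", "love", "language", "models"]
# In causal LM, each position's label is simply the next token.
pairs = list(zip(tokens[:-1], tokens[1:]))
print(pairs)  # [('I', 'love'), ('love', 'language'), ('language', 'models')]
```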
Preprocess
The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
```
You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the `flatten` method:
```python
eli5 = eli5.flatten()
```
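As a quick check (not in the original post), the nested field is now addressable as a flat, dotted column name:

```python
# After flatten(), nested fields become top-level columns such as
# "answers.text"; this prints the first example's list of answer strings.
print(eli5["train"][0]["answers.text"])
```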
Each subfield is now a separate column, as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.
Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
```python
def preprocess_function(examples):
    return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and by increasing the number of processes with `num_proc`. Remove any columns you don't need:
```python
tokenized_eli5 = eli5.map(
    preprocess_function,
    batched=True,
    num_proc=4,
    remove_columns=eli5["train"].column_names,
)
```
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.
You can now use a second preprocessing function to:
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM
```python
block_size = 128
```
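The `group_texts` function applied next concatenates all tokenized sequences and slices them into `block_size` chunks, copying `input_ids` into `labels` (the model performs the one-position shift internally). A minimal sketch following the standard recipe:

```python
def group_texts(examples):
    # Concatenate all sequences in the batch into one long list per column.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder so every chunk is exactly block_size long.
    total_length = (total_length // block_size) * block_size
    # Split each column into chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal LM, the labels are the inputs themselves.
    result["labels"] = result["input_ids"].copy()
    return result
```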
Apply the `group_texts` function over the entire dataset:
```python
lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
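As a sanity check (assuming the names above), every grouped example should now be exactly `block_size` tokens long:

```python
# Each chunk produced by group_texts has a fixed length of block_size.
print(len(lm_dataset["train"][0]["input_ids"]))  # expected: 128
```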
Now create a batch of examples using [DataCollatorForLanguageModeling]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:
```python
from transformers import DataCollatorForLanguageModeling

tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
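To see what the collator produces, a hedged example using the names defined above:

```python
# Collate two examples into a batch; "labels" mirrors "input_ids", and any
# padded positions in "labels" are set to -100 so the loss ignores them.
features = [lm_dataset["train"][i] for i in range(2)]
batch = data_collator(features)
print(batch["input_ids"].shape, batch["labels"].shape)
```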
Train
You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM]:
```python
from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```
At this point, only three steps remain:
- Define your training hyperparameters in [TrainingArguments]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
- Pass the training arguments to [Trainer] along with the model, datasets, and data collator.
- Call [~Trainer.train] to finetune your model.
```python
training_args = TrainingArguments(
    output_dir="my_awesome_eli5_clm-model",
    eval_strategy="epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_dataset["train"],
    eval_dataset=lm_dataset["test"],
    data_collator=data_collator,
)

trainer.train()
```
Once training is completed, use the [~transformers.Trainer.evaluate] method to evaluate your model and get its perplexity:
```python
import math

eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
```
Then share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
```python
trainer.push_to_hub()
```
For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding PyTorch notebook or TensorFlow notebook.
Inference
Come up with a prompt you’d like to generate text from:
1 | "Somatic hypermutation allows the immune system to" prompt = |
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a `pipeline` for text generation with your model, and pass your text to it:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
generator(prompt)
```
Tokenize the text and return the `input_ids` as PyTorch tensors:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
inputs = tokenizer(prompt, return_tensors="pt").input_ids
```
Use the [~transformers.generation_utils.GenerationMixin.generate] method to generate text.
For more details about the different text generation strategies and parameters for controlling generation, check out the Text generation strategies page.
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```
Decode the generated token ids back into text:
```python
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
Conclusion
My 132nd blog post is finished. Happy!!!!
Today, too, is a day full of hope.