00131 Question Answering windows11
Preface
This post describes how to perform question answering.
Hugging Face Github 主页: https://github.com/huggingface
Question answering tasks return an answer given a question. There are two common types of question answering tasks:
- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.
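To make the distinction concrete, here is a minimal extractive sketch; the bare `pipeline` call falls back to a library-default checkpoint, so treat that choice as an assumption. The answer comes back with character offsets into the context, because the model selects a span rather than generating new text:

```python
from transformers import pipeline

# Extractive QA: the model returns a span copied verbatim from the context,
# together with `start`/`end` character offsets into it.
qa = pipeline("question-answering")
result = qa(
    question="What does an extractive model return?",
    context="An extractive model returns a span copied verbatim from the context.",
)
print(result["answer"], result["start"], result["end"])
```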
This guide will show you how to:
- Finetune DistilBERT on the SQuAD dataset for extractive question answering.
- Use your finetuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
ALBERT, BART, BERT, BigBird, BigBird-Pegasus, BLOOM, CamemBERT, CANINE, ConvBERT, Data2VecText, DeBERTa, DeBERTa-v2, DistilBERT, ELECTRA, ERNIE, ErnieM, Falcon, FlauBERT, FNet, Funnel Transformer, OpenAI GPT-2, GPT Neo, GPT NeoX, GPT-J, I-BERT, LayoutLMv2, LayoutLMv3, LED, LiLT, LLaMA, Longformer, LUKE, LXMERT, MarkupLM, mBART, MEGA, Megatron-BERT, MobileBERT, MPNet, MPT, MRA, MT5, MVP, Nezha, Nyströmformer, OPT, QDQBert, Reformer, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, Splinter, SqueezeBERT, T5, UMT5, XLM, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD, YOSO
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```python
from huggingface_hub import notebook_login

notebook_login()
```
Operating system: Windows 11 Home, Chinese edition
Reference documentation
Load SQuAD dataset
Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```python
from datasets import load_dataset

squad = load_dataset("squad", split="train[:5000]")
```
Split the dataset's `train` split into a train and test set with the [~datasets.Dataset.train_test_split] method:
```python
squad = squad.train_test_split(test_size=0.2)
```
Then take a look at an example:
1 | "train"][0] squad[ |
There are several important fields here:
- `answers`: the starting location of the answer token and the answer text.
- `context`: background information from which the model needs to extract the answer.
- `question`: the question a model should answer.
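You can sanity-check how these fields fit together: the `answers` dict stores each answer alongside its character offset into `context`, so slicing the context at that offset should reproduce the answer text. A quick check, assuming the `squad` split created above:

```python
example = squad["train"][0]
answer_text = example["answers"]["text"][0]
start_char = example["answers"]["answer_start"][0]

# The answer is a literal substring of the context, located by its character
# offset -- this is the property extractive question answering relies on.
assert example["context"][start_char : start_char + len(answer_text)] == answer_text
```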
Preprocess
The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
There are a few preprocessing steps particular to question answering tasks you should be aware of:
- Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
- Next, map the start and end positions of the answer to the original `context` by setting `return_offsets_mapping=True`.
- With the mapping in hand, you can find the start and end tokens of the answer. Use the [~tokenizers.Encoding.sequence_ids] method to find which part of the offset corresponds to the `question` and which corresponds to the `context` (see the sketch after this list).
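To see what the [~tokenizers.Encoding.sequence_ids] method returns, here is a small sketch using the tokenizer loaded above: special tokens map to None, question tokens to 0, and context tokens to 1.

```python
example = squad["train"][0]
encoding = tokenizer(example["question"], example["context"], truncation="only_second", max_length=384)

# None for special tokens, 0 for question tokens, 1 for context tokens
ids = encoding.sequence_ids()
context_start = ids.index(1)                     # first context token
context_end = len(ids) - 1 - ids[::-1].index(1)  # last context token
print(context_start, context_end)
```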
Here is how you can create a function to truncate and map the start and end tokens of the answer to the `context`:
```python
def preprocess_function(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions,
        examples["context"],
        max_length=384,
        truncation="only_second",
        return_offsets_mapping=True,
        padding="max_length",
    )

    offset_mapping = inputs.pop("offset_mapping")
    answers = examples["answers"]
    start_positions = []
    end_positions = []

    for i, offset in enumerate(offset_mapping):
        answer = answers[i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = inputs.sequence_ids(i)

        # Find the start and end of the context
        idx = 0
        while sequence_ids[idx] != 1:
            idx += 1
        context_start = idx
        while sequence_ids[idx] == 1:
            idx += 1
        context_end = idx - 1

        # If the answer is not fully inside the context, label it (0, 0)
        if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Otherwise it's the start and end token positions
            idx = context_start
            while idx <= context_end and offset[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)

            idx = context_end
            while idx >= context_start and offset[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)

    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don't need:
```python
tokenized_squad = squad.map(
    preprocess_function,
    batched=True,
    remove_columns=squad["train"].column_names,
)
```
Now create a batch of examples using [DefaultDataCollator]. Unlike other data collators in 🤗 Transformers, the [DefaultDataCollator] does not apply any additional preprocessing such as padding.
```python
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator()
```
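Dynamic padding is not needed here because `preprocess_function` already padded every example to `max_length`, so the collator only has to stack ready-made features into tensors. A quick sketch to confirm (the two-example batch is arbitrary):

```python
# Build a tiny batch by hand; every feature already has the same length,
# so DefaultDataCollator just converts the lists into stacked tensors.
features = [tokenized_squad["train"][i] for i in range(2)]
batch = data_collator(features)
print({k: v.shape for k, v in batch.items()})
```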
Train
You're ready to start training your model now! Load DistilBERT with [AutoModelForQuestionAnswering]:
```python
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```
At this point, only three steps remain:
- Define your training hyperparameters in [TrainingArguments]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
- Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, and data collator.
- Call [~Trainer.train] to finetune your model.
```python
training_args = TrainingArguments(
    output_dir="my_awesome_qa_model",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_squad["train"],
    eval_dataset=tokenized_squad["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
)

trainer.train()
```
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
```python
trainer.push_to_hub()
```
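After the upload finishes, anyone can pull the model back down by its Hub id. The repo id below is hypothetical; substitute the one printed by `push_to_hub()`:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# "your-username" is a placeholder for your actual Hub username.
model = AutoModelForQuestionAnswering.from_pretrained("your-username/my_awesome_qa_model")
tokenizer = AutoTokenizer.from_pretrained("your-username/my_awesome_qa_model")
```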
For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding PyTorch notebook or TensorFlow notebook.
Evaluate
Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.
If you have more time and you're interested in how to evaluate your model for question answering, take a look at the Question answering chapter of the 🤗 Hugging Face Course!
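If you just want to see the metric itself, the 🤗 Evaluate library ships the SQuAD metric (exact match and F1). The hard part, mapping start/end logits back to text spans, is what the course chapter covers; the predictions below are hand-written stand-ins for that step:

```python
import evaluate

squad_metric = evaluate.load("squad")

# Hand-written stand-ins: in practice, `prediction_text` comes from
# postprocessing your model's start/end logits back into context spans.
predictions = [{"id": "0", "prediction_text": "Denver Broncos"}]
references = [{"id": "0", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```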
Inference
Come up with a question and some context you’d like the model to predict:
1 | "How many programming languages does BLOOM support?" question = |
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a `pipeline` for question answering with your model, and pass your text to it:
```python
from transformers import pipeline

question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
question_answerer(question=question, context=context)
```
You can also manually replicate the results of the `pipeline` if you'd like:
Tokenize the text and return PyTorch tensors:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
inputs = tokenizer(question, context, return_tensors="pt")
```
Pass your inputs to the model and return the `logits`:
```python
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
with torch.no_grad():
    outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```python
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
```
Decode the predicted tokens to get the answer:
```python
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
Conclusion
The 131st blog post is done. Happy!!!!
Today is another day full of hope.