# Question Answering

Extractive question answering: given a question and a passage of text, find the span of the passage that answers the question.

For example:

![Widget inference representing the QA task](/files/-MiNIEr8eyT4zGdOLzUF)
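Concretely, each sample pairs a question with a context and locates the answer inside the context by its character offset. A representative SQuAD record (context abridged here for display) looks like this:

```python
# One SQuAD sample: the answer is a span of `context`, and `answer_start`
# is the character offset of that span in the full context.
example = {
    "question": "Which NFL team represented the AFC at Super Bowl 50?",
    "context": "Super Bowl 50 was an American football game ... The American Football "
               "Conference (AFC) champion Denver Broncos defeated the National Football "
               "Conference (NFC) champion Carolina Panthers 24-10 ...",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}
```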

## Loading the dataset
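The code on this page relies on a few global settings that are never defined here. Judging from the checkpoint named in the warning further down and the batch shapes shown during evaluation, the original notebook's setup is:

```python
squad_v2 = False  # True switches to SQuAD v2, which also contains unanswerable questions
model_checkpoint = "distilbert-base-uncased"  # used for both the tokenizer and the model
batch_size = 16
```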

```python
from datasets import load_dataset

datasets = load_dataset("squad_v2" if squad_v2 else "squad")
```
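It is worth peeking at one sample to see the fields we will be working with (a quick sanity check, not part of the pipeline):

```python
print(datasets)              # a DatasetDict with "train" and "validation" splits
print(datasets["train"][0])  # each sample has id, title, context, question and answers
```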

## Preprocessing the data

As usual, everything starts with the tokenizer.
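The code below uses a `tokenizer` object without defining it; it is created from the same checkpoint as the model, along the lines of:

```python
from transformers import AutoTokenizer

# Loading from the same checkpoint as the model guarantees that the
# tokenizer's output format matches what the model expects.
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```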

Here is what the tokenizer does to raw text:

```python
# To see what the tokenizer does to the text, we can call its tokenize method directly;
# add_special_tokens=True adds the special tokens the pretrained model requires (e.g. [CLS] and [SEP]).
print("First text tokenized: {}".format(tokenizer.tokenize("What is your name?", add_special_tokens=True)))
print("Second text tokenized: {}".format(tokenizer.tokenize("My name is Sylvain.", add_special_tokens=True)))
# The model's input format requires token IDs plus an attention mask; calling the
# tokenizer itself (rather than just tokenize) produces inputs in exactly that format.
```

The token IDs (`input_ids`) produced by the `tokenizer` generally differ between pretrained models, because each model fixes its own rules during pretraining. But as long as **the tokenizer and the model come from the same checkpoint, the tokenizer's output format will match what the model expects**. For more on preprocessing, see [this tutorial](https://huggingface.co/transformers/preprocessing.html).

**How pretrained QA models handle very long texts:** pretrained models usually have a maximum input length, so overlong inputs are normally truncated. But if we truncate the overlong context of a QA triple, we may lose the answer (which we extract as a span of the context). *Instead we split the overlong input into several shorter chunks, each within the model's maximum input length. Since the answer might sit exactly where we split, adjacent chunks are allowed to overlap; in the code this overlap is controlled by the `doc_stride` parameter.*

Pretrained QA models typically take the question and the context concatenated as a single input, and learn to locate the answer inside the context part.
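Calling the tokenizer on a pair of texts shows this concatenation directly (a quick sketch; the decoded string below assumes a BERT-style uncased tokenizer):

```python
# Question and context are joined into a single sequence:
# [CLS] question tokens [SEP] context tokens [SEP]
encoded = tokenizer("What is your name?", "My name is Sylvain.")
print(tokenizer.decode(encoded["input_ids"]))
# e.g. "[CLS] what is your name? [SEP] my name is sylvain. [SEP]"
```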

```python
max_length = 384  # maximum length of one feature (question and context concatenated)
doc_stride = 128  # number of overlapping tokens between two adjacent chunks
```

1. Since the context is concatenated after the question, it corresponds to the second text, so we truncate with `truncation="only_second"`. The tokenizer's `stride` argument (set to `doc_stride`) controls the overlap between adjacent chunks.
2. Because we chunk overlong texts, we have to relocate the answer inside each chunk (its position relative to the start of that chunk's context). QA models are trained on the answer's position (start and end) as labels, not on the answer's token IDs. So we need the correspondence between each token's position in a chunk and its position in the original overlong context. The tokenizer's `return_offsets_mapping` argument returns exactly this mapping:

```python
example = datasets["train"][0]  # one sample; a long context would be split into several chunks

tokenized_example = tokenizer(
    example["question"],
    example["context"],
    max_length=max_length,
    truncation="only_second",
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    stride=doc_stride
)
# Mapping from each token (after chunking) to character positions in the original text
print(tokenized_example["offset_mapping"][0][:100])
```

```
[(0, 0), (0, 3), (4, 8), (9, 13), (14, 18), (19, 22), (23, 28), (29, 33), (34, 37), (37, 38), (38, 39), (40, 50), (51, 55), (56, 60), (60, 61), (0, 0), (0, 3), (4, 7), (7, 8), (8, 9), (10, 20), (21, 25), (26, 29), (30, 34), (35, 36), (36, 37), (37, 40), (41, 45), (45, 46), (47, 50), (51, 53), (54, 58), (59, 61), (62, 69), (70, 73), (74, 78), (79, 86), (87, 91), (92, 96), (96, 97), (98, 101), (102, 106), (107, 115), (116, 118), (119, 121), (122, 126), (127, 138), (138, 139), (140, 146), (147, 153), (154, 160), (161, 165), (166, 171), (172, 175), (176, 182), (183, 186), (187, 191), (192, 198), (199, 205), (206, 208), (209, 210), (211, 217), (218, 222), (223, 225), (226, 229), (230, 240), (241, 245), (246, 248), (248, 249), (250, 258), (259, 262), (263, 267), (268, 271), (272, 277), (278, 281), (282, 285), (286, 290), (291, 301), (301, 302), (303, 307), (308, 312), (313, 318), (319, 321), (322, 325), (326, 330), (330, 331), (332, 340), (341, 351), (352, 354), (355, 363), (364, 373), (374, 379), (379, 380), (381, 384), (385, 389), (390, 393), (394, 406), (407, 408), (409, 415), (416, 418)]
```
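Each `(start, end)` pair is a token's character span in the original question or context, and `(0, 0)` marks special tokens. As a quick check, we can map a token back to the original string; a sketch:

```python
# Take the second token of the first chunk (index 0 is [CLS]) and verify that
# its offsets point at the same word in the original question.
first_token_id = tokenized_example["input_ids"][0][1]
offsets = tokenized_example["offset_mapping"][0][1]
print(tokenizer.convert_ids_to_tokens([first_token_id])[0],
      example["question"][offsets[0]:offsets[1]])
```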

3. We also need the `sequence_ids` method to distinguish the question from the context:

```python
sequence_ids = tokenized_example.sequence_ids()
```
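For a BERT-like tokenizer this returns `None` for special tokens, `0` for tokens of the first text (the question) and `1` for tokens of the second text (the context):

```python
print(sequence_ids)
# e.g. [None, 0, 0, 0, 0, 0, None, 1, 1, ..., 1, None]
```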

4. Note that some models expect the question before the context while others expect the context first; we check the tokenizer's `padding_side` attribute to know which convention applies:

```python
pad_on_right = tokenizer.padding_side == "right"  # if True, the context is the second text
```

Putting all of this together:

```python
def prepare_train_features(examples):
    # Since we both truncate/pad the examples and want to keep all information,
    # overlong examples are chunked: one long example becomes several inputs,
    # and adjacent inputs overlap.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # overflow_to_sample_mapping maps each feature back to the example it came from.
    # E.g. if 2 examples are split into 4 features, it is [0, 0, 1, 1]: the first
    # two features come from the first example.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    # offset_mapping likewise has one entry per feature. It maps each token back to
    # character positions in the original input; since the answers are annotated on
    # the original input, this lets us find the answer's start and end positions.
    offset_mapping = tokenized_examples.pop("offset_mapping")

    # Re-label the answer positions on each feature
    tokenized_examples["start_positions"] = []
    tokenized_examples["end_positions"] = []

    for i, offsets in enumerate(offset_mapping):
        # Process one feature (chunk) at a time.
        # Unanswerable samples will be labeled on the CLS token.
        input_ids = tokenized_examples["input_ids"][i]
        cls_index = input_ids.index(tokenizer.cls_token_id)

        # Distinguish question tokens from context tokens.
        sequence_ids = tokenized_examples.sequence_ids(i)

        # Index of the original example this feature came from.
        sample_index = sample_mapping[i]
        answers = examples["answers"][sample_index]
        # If there is no answer, use the CLS index as the answer position.
        if len(answers["answer_start"]) == 0:
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Character-level start/end positions of the answer.
            start_char = answers["answer_start"][0]
            end_char = start_char + len(answers["text"][0])

            # Move token_start_index to the first token of the context.
            token_start_index = 0
            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                token_start_index += 1

            # Move token_end_index to the last token of the context.
            token_end_index = len(input_ids) - 1
            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                token_end_index -= 1

            # If the answer is outside this chunk's span, label it with the CLS index as well.
            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Otherwise move token_start_index and token_end_index to the two ends of the answer.
                # Note: we could go after the last offset if the answer is the last word (edge case).
                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                    token_start_index += 1
                tokenized_examples["start_positions"].append(token_start_index - 1)
                while offsets[token_end_index][1] >= end_char:
                    token_end_index -= 1
                tokenized_examples["end_positions"].append(token_end_index + 1)

    return tokenized_examples
```
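The fine-tuning step below expects a `tokenized_datasets` variable. As in the original notebook, it is produced by applying this function to every split with `map`:

```python
tokenized_datasets = datasets.map(
    prepare_train_features,
    batched=True,  # batched mode lets one example turn into several features
    remove_columns=datasets["train"].column_names,  # drop raw columns, keep only model inputs
)
```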

## Fine-tuning the model

Since our task is question answering, we use the `AutoModelForQuestionAnswering` class:

```python
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
```

This prints a warning:

> Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForQuestionAnswering: ['vocab_projector.bias', 'vocab_transform.bias', 'vocab_transform.weight', 'vocab_layer_norm.weight', 'vocab_projector.weight', 'vocab_layer_norm.bias']
>
> - This IS expected if you are initializing DistilBertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
> - This IS NOT expected if you are initializing DistilBertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
>
> Some weights of DistilBertForQuestionAnswering were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']
>
> You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Because we are fine-tuning on a question answering task while loading a pretrained language model, the message tells us that some weights were dropped when loading the checkpoint (the pretrained language-modeling head) and that a question answering head was randomly initialized in its place.

It is precisely because of these randomly initialized parameters that we need to fine-tune the model on our dataset.

1. Define the training arguments:

```python
args = TrainingArguments(
    f"test-squad",
    evaluation_strategy = "epoch", # 每个epcoh会做一次验证评估。
    learning_rate=2e-5, #学习率
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=3, # 训练的论次
    weight_decay=0.01,
)
```

2. `default_data_collator` batches the preprocessed features and feeds them to the model.

```python
from transformers import default_data_collator

data_collator = default_data_collator
```

3. Pass the model, the training arguments, the datasets, the tokenizer we used for preprocessing, and the `default_data_collator` to the `Trainer`:

```python
trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
```

```python
trainer.train()
```

```python
trainer.save_model("test-squad-trained")
```
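The saved model can be reloaded later from that directory with the standard `from_pretrained` API; a sketch:

```python
# Reload the fine-tuned weights from the directory written by save_model.
model = AutoModelForQuestionAnswering.from_pretrained("test-squad-trained")
```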

## Evaluation

Evaluation is a little more involved: we need to post-process the model's output into the text spans we care about. The model itself predicts logits for the start and end positions of the answer. If we feed it one batch during evaluation, the output looks like this:

```python
import torch

for batch in trainer.get_eval_dataloader():
    break
batch = {k: v.to(trainer.args.device) for k, v in batch.items()}
with torch.no_grad():
    output = trainer.model(**batch)
output.keys()
```

The model's output is a dict-like object that contains the loss (since we provided labels) and the logits for the answer's start and end positions. We don't need the loss to produce predictions; the logits are enough:

```python
output.start_logits.shape, output.end_logits.shape
```

```
(torch.Size([16, 384]), torch.Size([16, 384]))
```

Every token of every feature gets a logit. The simplest way to predict an answer is to take the index of the largest start logit as the answer's start position and the index of the largest end logit as its end position:

```python
output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1)
```

```
(tensor([ 46,  57,  78,  43, 118,  15,  72,  35,  15,  34,  73,  41,  80,  91,
         156,  35], device='cuda:0'),
 tensor([ 47,  58,  81,  55, 118, 110,  75,  37, 110,  36,  76,  53,  83,  94,
         158,  35], device='cuda:0'))
```

This strategy works well in most cases. But it can also produce invalid predictions: for instance, the start index may come after the end index, or the predicted span may fall inside the question instead of the context.

A simple remedy would be to fall back on the second-best prediction, then the third-best, and so on.

Since that approach is unlikely to find a valid answer quickly, we need something more principled: we score every candidate span by the sum of its start and end logits, considering the `n_best_size` best start indices and the `n_best_size` best end indices. From each (start, end) pair we reconstruct the corresponding answer, check that it is valid, rank all candidates by score, and pick the highest-scoring valid one as the answer.

```python
n_best_size = 20
```

```python
import numpy as np

start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
# Gather the indices of the best start and end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
    for end_index in end_indexes:
        if start_index <= end_index:  # a span is only valid if start comes before end
            valid_answers.append(
                {
                    "score": start_logits[start_index] + end_logits[end_index],
                    "text": ""  # the text will be recovered later from the token indices
                }
            )
```

We then sort `valid_answers` by `score` and keep the best one. One last step remains: checking that the start and end positions fall inside the context rather than the question.

To do this, we need to add two pieces of information to our validation features:

* the ID of the example that produced the feature (since one example can produce several features, each feature needs to know which example it belongs to);
* the offset mapping, which maps token positions in each chunk back to character positions in the original text.

So we process the validation set again, with a function slightly different from `prepare_train_features`:

```python
def prepare_validation_features(examples):
    # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")

    # We keep the example_id that gave us this feature and we will store the offset mappings.
    tokenized_examples["example_id"] = []

    for i in range(len(tokenized_examples["input_ids"])):
        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])

        # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
        # position is part of the context or not.
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]

    return tokenized_examples
```

As before, we apply `prepare_validation_features` to every sample of the validation set:

```python
validation_features = datasets["validation"].map(
    prepare_validation_features,
    batched=True,
    remove_columns=datasets["validation"].column_names
)
```

We obtain the predictions for all features with the `Trainer.predict` method:

```python
raw_predictions = trainer.predict(validation_features)
```
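`raw_predictions.predictions` is a tuple of two NumPy arrays holding the start and end logits for every feature; a quick shape check:

```python
all_start_logits, all_end_logits = raw_predictions.predictions
print(all_start_logits.shape, all_end_logits.shape)  # (num_features, max_length) each
```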

The `Trainer` *hides* the columns that were not used during training (here `example_id` and `offset_mapping`, which we need for post-processing), so we set them back:

```python
validation_features.set_format(type=validation_features.format["type"], columns=list(validation_features.features.keys()))
```

Whenever a token position belongs to the question, `prepare_validation_features` sets its offset mapping to `None`, so we can easily tell from the offset mapping whether a token lies in the context. We also discard answers that are excessively long:

```python
max_answer_length = 30
```

```python
start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
offset_mapping = validation_features[0]["offset_mapping"]
# The first feature comes from the first example. For the more general case, we will need to match
# the example_id to an example index.
context = datasets["validation"][0]["context"]

# Gather the indices the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
    for end_index in end_indexes:
        # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
        # to part of the input_ids that are not in the context.
        if (
            start_index >= len(offset_mapping)
            or end_index >= len(offset_mapping)
            or offset_mapping[start_index] is None
            or offset_mapping[end_index] is None
        ):
            continue
        # Don't consider answers with a length that is either < 0 or > max_answer_length.
        if end_index < start_index or end_index - start_index + 1 > max_answer_length:
            continue
        # The indices are valid: recover the answer text from the original context.
        start_char = offset_mapping[start_index][0]
        end_char = offset_mapping[end_index][1]
        valid_answers.append(
            {
                "score": start_logits[start_index] + end_logits[end_index],
                "text": context[start_char: end_char]
            }
        )

valid_answers = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[:n_best_size]
valid_answers
```

```
[{'score': 16.706663, 'text': 'Denver Broncos'},
 {'score': 14.635585,
  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},
 {'score': 13.234194, 'text': 'Carolina Panthers'},
 {'score': 12.468662, 'text': 'Broncos'},
 {'score': 11.709289, 'text': 'Denver'},
 {'score': 10.397583,
  'text': 'Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},
 {'score': 10.104669,
  'text': 'American Football Conference (AFC) champion Denver Broncos'},
 {'score': 9.721636,
  'text': 'The American Football Conference (AFC) champion Denver Broncos'},
 {'score': 9.007437,
  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10'},
 {'score': 8.834958,
  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina'},
 {'score': 8.38701,
  'text': 'Denver Broncos defeated the National Football Conference (NFC)'},
 {'score': 8.143825,
  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title.'},
 {'score': 8.03359,
  'text': 'American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},
 {'score': 7.832466,
  'text': 'Denver Broncos defeated the National Football Conference (NFC'},
 {'score': 7.650557,
  'text': 'The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},
 {'score': 7.6060467, 'text': 'Carolina Panthers 24–10'},
 {'score': 7.5795317,
  'text': 'Denver Broncos defeated the National Football Conference'},
 {'score': 7.433568, 'text': 'Carolina'},
 {'score': 6.742434,
  'text': 'Carolina Panthers 24–10 to earn their third Super Bowl title.'},
 {'score': 6.71136,
  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24'}]
```

We can now compare the predicted answer with the ground truth:

```python
datasets["validation"][0]["answers"]
```

```
{'answer_start': [177, 177, 177],
 'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos']}
```

The model got it right!

As the example above shows, the first feature is guaranteed to come from the first example, which makes things easy. For the remaining features we need a map from each feature to its example; and since one example may be split into several features, we also have to collect the answers from all of that example's features. The following code maps example indices to feature indices:

```python
import collections

examples = datasets["validation"]
features = validation_features

example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
    features_per_example[example_id_to_index[feature["example_id"]]].append(i)
```
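Chunking only adds features for overlong contexts, so the feature count is only slightly larger than the example count (with the run shown below, 10570 examples become 10784 features):

```python
print(len(examples), len(features))  # e.g. 10570 examples -> 10784 features
```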

The post-processing is now almost complete. The remaining question is how to handle unanswerable questions (when `squad_v2=True`). The code so far only considers answers found inside the context, so we also need to collect the score of the *no-answer* prediction (the prediction whose start and end both point at the CLS token). When an example has several features, we check the no-answer prediction in each of them, and the final null score of an example is the minimum of its features' null scores.

A question is predicted as unanswerable when its null score is higher than the scores of all candidate answers.

Putting everything together:

```python
from tqdm.auto import tqdm

def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
    all_start_logits, all_end_logits = raw_predictions
    # Build a map example to its corresponding features.
    example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
    features_per_example = collections.defaultdict(list)
    for i, feature in enumerate(features):
        features_per_example[example_id_to_index[feature["example_id"]]].append(i)

    # The dictionaries we have to fill.
    predictions = collections.OrderedDict()

    # Logging.
    print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")

    # Let's loop over all the examples!
    for example_index, example in enumerate(tqdm(examples)):
        # Those are the indices of the features associated to the current example.
        feature_indices = features_per_example[example_index]

        min_null_score = None # Only used if squad_v2 is True.
        valid_answers = []

        context = example["context"]
        # Looping through all the features associated to the current example.
        for feature_index in feature_indices:
            # We grab the predictions of the model for this feature.
            start_logits = all_start_logits[feature_index]
            end_logits = all_end_logits[feature_index]
            # This is what will allow us to map the positions in our logits to spans of text in the
            # original context.
            offset_mapping = features[feature_index]["offset_mapping"]

            # Update minimum null prediction.
            cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
            feature_null_score = start_logits[cls_index] + end_logits[cls_index]
            if min_null_score is None or feature_null_score < min_null_score:
                min_null_score = feature_null_score

            # Go through all possibilities for the `n_best_size` greater start and end logits.
            start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
            end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
            for start_index in start_indexes:
                for end_index in end_indexes:
                    # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
                    # to part of the input_ids that are not in the context.
                    if (
                        start_index >= len(offset_mapping)
                        or end_index >= len(offset_mapping)
                        or offset_mapping[start_index] is None
                        or offset_mapping[end_index] is None
                    ):
                        continue
                    # Don't consider answers with a length that is either < 0 or > max_answer_length.
                    if end_index < start_index or end_index - start_index + 1 > max_answer_length:
                        continue

                    start_char = offset_mapping[start_index][0]
                    end_char = offset_mapping[end_index][1]
                    valid_answers.append(
                        {
                            "score": start_logits[start_index] + end_logits[end_index],
                            "text": context[start_char: end_char]
                        }
                    )

        if len(valid_answers) > 0:
            best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
        else:
            # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
            # failure.
            best_answer = {"text": "", "score": 0.0}

        # Let's pick our final answer: the best one or the null answer (only for squad_v2)
        if not squad_v2:
            predictions[example["id"]] = best_answer["text"]
        else:
            answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
            predictions[example["id"]] = answer

    return predictions
```

We apply the post-processing function to our raw predictions:

```python
final_predictions = postprocess_qa_predictions(datasets["validation"], validation_features, raw_predictions.predictions)
```

```
Post-processing 10570 example predictions split into 10784 features.
```

Then we load the evaluation metric:

```python
from datasets import load_metric

metric = load_metric("squad_v2" if squad_v2 else "squad")
```

Alternatively, the metric can also be loaded from the local script we provide:

```python
metric_path = './dataset/metrics/squad.py'
metric = load_metric(metric_path)
```

Finally we compute the metric from predictions and references. For a fair comparison, both must be put into the format the metric expects. For SQuAD v2 the metric additionally requires a `no_answer_probability` for each prediction; since we already replaced no-answer predictions with the empty string, we can simply set it to 0.0:

```python
if squad_v2:
    formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
else:
    formatted_predictions = [{"id": k, "prediction_text": v} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in datasets["validation"]]
metric.compute(predictions=formatted_predictions, references=references)
```

```
{'exact_match': 76.74550614947965, 'f1': 85.13412652023338}
```

