---
language:
- en
metrics:
- accuracy
---

Fine-Tuned [BLINK](https://github.com/facebookresearch/blink/) CrossEncoder.

- Base model: https://huggingface.co/UnlikelyAI/crossencoder-wiki-large
- Training data:
  - 20% (stratified by source dataset) of the following [entity resolution benchmarks](https://console.cloud.google.com/bigquery?ws=!1m4!1m3!3m2!1sunlikelyaiprincipal!2sfixed):
    - handwritten_entity_linking
    - wikibank_entity_linking
    - kilt_entity_linking
    - jobe_entity_linking
    - qald9_entity_linking
- Training setup:
  - 1 L4 GPU (23GB)
  - batch_size = 1
  - gradient_accumulation_steps = 8
  - type_optimization = "all_encoder_layers" (i.e. ["additional", "bert_model.encoder.layer"])
  - n_epochs = 2
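The hyper-parameters in the training setup can be gathered into a single config dict; with gradient accumulation, the optimizer sees an effective batch size of `batch_size * gradient_accumulation_steps`. This is an illustrative sketch whose key names simply mirror the bullet list above, not the exact config file used for the run.

```python
# Illustrative training config mirroring the setup listed above.
# Key names follow the bullet list; this is not the actual config file.
train_config = {
    "batch_size": 1,                   # per-step batch size on 1x L4 (23GB)
    "gradient_accumulation_steps": 8,  # gradients accumulated over 8 steps
    # Unfreezes the extra head plus all BERT encoder layers,
    # i.e. ["additional", "bert_model.encoder.layer"]:
    "type_optimization": "all_encoder_layers",
    "n_epochs": 2,
}

# Effective batch size per optimizer update:
effective_batch_size = (
    train_config["batch_size"] * train_config["gradient_accumulation_steps"]
)
print(effective_batch_size)  # -> 8
```

Accumulating gradients over 8 steps with batch_size = 1 keeps peak memory within the 23GB of a single L4 while matching the update statistics of a batch of 8.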