# BERT_LearningMobility

This repository contains a BERT encoder saved after fine-tuning on eye-tracking data in the VDA_ET workflow.

The temporary token-level regression head used during training is not included (a sketch of a comparable head appears below the training metadata). Load the checkpoint with `AutoModel.from_pretrained` for downstream encoder analysis or continued fine-tuning.

```python
from transformers import AutoModel, AutoTokenizer

model_id = "calogero-jerik-scozzaro/BERT_LearningMobility"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModel.from_pretrained(model_id)  # encoder only; no regression head
```
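
For downstream encoder analysis, a common pattern is to mean-pool the final hidden states into sentence embeddings. A minimal sketch, assuming that use case (the example sentences and the pooling choice are illustrative, not part of this repository):

```python
import torch

# Mean-pooled sentence embeddings; padding tokens are masked out.
sentences = ["Una frase di esempio.", "Un'altra frase."]
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    hidden = batch_out = model(**batch).last_hidden_state  # (batch, seq, hidden)
mask = batch["attention_mask"].unsqueeze(-1)               # (batch, seq, 1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)          # (batch, hidden)
```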

## Training metadata

| Field | Value |
| --- | --- |
| all_train_texts | LearningMobility |
| batch_size | 8 |
| epochs | 100 |
| learning_rate | 2e-05 |
| max_length | 256 |
| measures | FFD, FPRT, TFT, RRT, skipped, FPF, RR |
| num_train_sentences | 10 |
| source_model | dbmdz/bert-base-italian-uncased |
| stage | 1 |
| stage_train_texts | LearningMobility |
| test_texts | HumanRights |
| variant | BERT_LearningMobility |
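
During training, a temporary token-level regression head mapped each token representation to the seven eye-tracking measures listed above. That head is not shipped; the following is a minimal sketch of how one might re-attach a comparable head for continued fine-tuning. The module and its layout are assumptions, not the original training code:

```python
import torch.nn as nn

# Assumed reconstruction, not the original head: a single linear layer
# projecting each token's hidden state to the 7 eye-tracking measures
# (FFD, FPRT, TFT, RRT, skipped, FPF, RR).
class TokenRegressionHead(nn.Module):
    def __init__(self, hidden_size: int, num_measures: int = 7):
        super().__init__()
        self.regressor = nn.Linear(hidden_size, num_measures)

    def forward(self, hidden_states):          # (batch, seq, hidden)
        return self.regressor(hidden_states)   # (batch, seq, num_measures)

head = TokenRegressionHead(model.config.hidden_size)
```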

The uploaded files include `et_label_scaler.json`, which records the min-max scaling statistics used to normalize the eye-tracking labels.
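
The exact JSON layout is not documented here. Assuming the file stores per-measure min and max values, scaled predictions could be mapped back to the original units roughly as follows (the key names are hypothetical; inspect the file for the actual schema):

```python
import json
from huggingface_hub import hf_hub_download

scaler_path = hf_hub_download(model_id, "et_label_scaler.json")
with open(scaler_path) as f:
    scaler = json.load(f)

def inverse_scale(value: float, measure: str) -> float:
    """Undo min-max scaling for one eye-tracking measure.

    Assumes entries like scaler["TFT"] == {"min": ..., "max": ...};
    adjust to the real schema after inspecting the file.
    """
    lo, hi = scaler[measure]["min"], scaler[measure]["max"]
    return value * (hi - lo) + lo
```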
