
Cozmo-EMRO: Emotion Model of Robot Observation

Cozmo-EMRO is a RoBERTa-based transformer model fine-tuned to classify emotions from textual descriptions of behaviors generated by the Cozmo robot. Given a behavioral description, the model assigns one of six grouped emotion categories.

Model Details

  • Base Model: RoBERTa-base
  • Task: Emotion Classification (Text Classification)
  • Inputs: A textual description of Cozmo robot behaviors
  • Outputs: One of six grouped emotion labels
  • Number of Emotion Classes: 6

Development code: https://github.com/bsu-slim/emro-gred-cozmo

Emotion Labels

The model classifies each input behavior into one of the following six grouped emotion categories:

  • 0: anger_frustration
  • 1: confusion_sorrow_boredom
  • 2: disgust_surprise_alarm_fear
  • 3: interest_desire
  • 4: joy_hope
  • 5: understanding_gratitude_relief
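For programmatic use, the index-to-label mapping above can be written as a plain dictionary (the same mapping also ships in this repo's config.json):

```python
# Index-to-label mapping for the six grouped emotion categories
ID2LABEL = {
    0: "anger_frustration",
    1: "confusion_sorrow_boredom",
    2: "disgust_surprise_alarm_fear",
    3: "interest_desire",
    4: "joy_hope",
    5: "understanding_gratitude_relief",
}

print(ID2LABEL[4])  # joy_hope
```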

Files in this repo

  • pytorch_model.bin – Fine-tuned model weights
  • config.json – Model architecture and label mapping
  • vocab.json, merges.txt, tokenizer_config.json, special_tokens_map.json – Tokenizer files
  • cozmo_emro.py โ€“ Custom model class definition (RobertaClass)

How to Use

Loading the model requires the custom model class RobertaClass defined in cozmo_emro.py:

import torch
from huggingface_hub import hf_hub_download
from transformers import RobertaTokenizer, RobertaConfig
from cozmo_emro import RobertaClass

model_name = "bsu-slim/emro-cozmo"

# Load tokenizer and config
tokenizer = RobertaTokenizer.from_pretrained(model_name)
config = RobertaConfig.from_pretrained(model_name)

# Download the fine-tuned weights from the Hub and load them into the custom class
weights_path = hf_hub_download(repo_id=model_name, filename="pytorch_model.bin")
model = RobertaClass(num_classes=config.num_labels)
model.load_state_dict(torch.load(weights_path, map_location="cpu"))
model.eval()
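Once the model is loaded, its output still has to be decoded into a label. The exact forward signature of RobertaClass is defined in cozmo_emro.py; the helper below is a minimal sketch that only assumes you end up with one logit (or score) per class, in the label order listed above:

```python
# Labels in index order, matching the six grouped emotion categories
LABELS = [
    "anger_frustration",
    "confusion_sorrow_boredom",
    "disgust_surprise_alarm_fear",
    "interest_desire",
    "joy_hope",
    "understanding_gratitude_relief",
]

def decode_prediction(logits):
    """Map a sequence of six class logits to the predicted emotion label."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[best]

# Example with made-up logits (not real model output):
print(decode_prediction([0.1, -1.2, 0.3, 2.4, 0.0, -0.5]))  # interest_desire
```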