m-e5-small-vlsp2018-restaurant
Overview
Vietnamese aspect-category sentiment classification model for restaurant reviews from the VLSP 2018 sentiment analysis benchmark.
Model Details
- Base model: intfloat/multilingual-e5-small
- Architecture: `absa`
- Checkpoint source: `vlsp-2018-restaurant-e5-small-best.pt`
- Sequence length used in the training/inference pipeline: 256
- Number of aspect categories: 12
Label Schema
- 0: aspect not mentioned
- 1: positive
- 2: negative
- 3: neutral
Aspect Categories
- AMBIENCE#GENERAL
- DRINKS#PRICES
- DRINKS#QUALITY
- DRINKS#STYLE&OPTIONS
- FOOD#PRICES
- FOOD#QUALITY
- FOOD#STYLE&OPTIONS
- LOCATION#GENERAL
- RESTAURANT#GENERAL
- RESTAURANT#MISCELLANEOUS
- RESTAURANT#PRICES
- SERVICE#GENERAL
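For downstream post-processing it can help to have the schema above written out as plain Python constants. A minimal, illustrative sketch (the list order below is an assumption; check the repository's custom configuration code for the order the model actually uses):

```python
# Aspect categories of the VLSP 2018 restaurant subset (order here is illustrative only).
ASPECT_CATEGORIES = [
    "AMBIENCE#GENERAL", "DRINKS#PRICES", "DRINKS#QUALITY", "DRINKS#STYLE&OPTIONS",
    "FOOD#PRICES", "FOOD#QUALITY", "FOOD#STYLE&OPTIONS", "LOCATION#GENERAL",
    "RESTAURANT#GENERAL", "RESTAURANT#MISCELLANEOUS", "RESTAURANT#PRICES",
    "SERVICE#GENERAL",
]

# Per-aspect sentiment label ids, as documented in the label schema above.
SENTIMENT_LABELS = {0: "not mentioned", 1: "positive", 2: "negative", 3: "neutral"}
```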
Dataset
- Dataset: VLSP 2018 Restaurant Reviews

This model is trained on the restaurant subset of the VLSP 2018 aspect-based sentiment analysis benchmark.
Data Format
- `Review` is the input text column.
- Each aspect-category column is encoded as 0/1/2/3 for none, positive, negative, or neutral.
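As an illustration only (the review and labels below are invented, not taken from the corpus), one annotated row in this format could look like:

```python
# Hypothetical row: one text column plus one integer label per aspect category.
example_row = {
    "Review": "Đồ ăn ngon nhưng phục vụ hơi chậm.",  # "Food is good, but service is a bit slow."
    "FOOD#QUALITY": 1,      # positive
    "SERVICE#GENERAL": 2,   # negative
    "AMBIENCE#GENERAL": 0,  # not mentioned
    # ... remaining aspect-category columns are 0 when the aspect is not mentioned
}
```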
Splits
- Train: 2961 samples
- Validation: 1290 samples
- Test: 500 samples
Checkpoint Metrics
- loss: 0.3680
- accuracy: 0.8747
Usage
Load the model with trust_remote_code=True because this repository contains custom modeling code.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
repo_id = "NeoCyber/m-e5-small-vlsp2018-restaurant"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(
repo_id,
trust_remote_code=True,
)
texts = ["Đồ ăn ngon nhưng phục vụ hơi chậm."]
inputs = tokenizer(texts, return_tensors="pt", truncation=True, padding=True)
outputs = model(**inputs)
predictions = model.decode_predictions(outputs.logits)
print(predictions)
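The model can also be loaded through the high-level pipeline helper with trust_remote_code=True. A minimal sketch; note that the generic text-classification post-processing assumes a single label per input, so the decode_predictions route above is the more direct way to get per-aspect results:

```python
from transformers import pipeline

# Pipeline loading; trust_remote_code=True is still required for the custom architecture.
pipe = pipeline(
    "text-classification",
    model="NeoCyber/m-e5-small-vlsp2018-restaurant",
    trust_remote_code=True,
)
```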
Notes
- The repository includes custom `configuration_*.py` and `modeling_*.py` files required by the `transformers` auto classes.
- `outputs.logits` has shape `[batch_size, num_aspects, 4]`, and `model.decode_predictions(...)` maps logits back to aspect-level labels.
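If you prefer not to go through decode_predictions, the same information can be read directly off the logits. A minimal sketch, assuming the aspect ordering matches the ASPECT_CATEGORIES list earlier in this card (an assumption; verify against the repository's custom code):

```python
# outputs.logits: [batch_size, num_aspects, 4]. Take the argmax over the 4 sentiment
# classes for each aspect, then map ids back to readable names.
label_ids = outputs.logits.argmax(dim=-1)  # [batch_size, num_aspects]

for text, row in zip(texts, label_ids.tolist()):
    print(text)
    for aspect, label_id in zip(ASPECT_CATEGORIES, row):
        if label_id != 0:  # skip aspects the review does not mention
            print(f"  {aspect}: {SENTIMENT_LABELS[label_id]}")
```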