---
language: en
license: mit
library_name: transformers
tags:
  - sentiment-analysis
  - classification
  - from-scratch
datasets:
  - imdb
metrics:
  - accuracy
model-index:
  - name: CritiqueCore-v1
    results:
      - task:
          type: text-classification
          name: Sentiment Analysis
        dataset:
          name: imdb
          type: imdb
        metrics:
          - type: accuracy
            value: 0.9
pipeline_tag: text-classification
---

# CritiqueCore v1

CritiqueCore v1 is a compact Transformer model trained from scratch for sentiment analysis. Unlike models built on transfer learning, it was initialized with random weights and learned its notion of sentiment, including some sarcasm and limited cross-lingual generalization, exclusively from the IMDb movie-review dataset.

## Model Description

- **Architecture:** Custom mini-Transformer (DistilBERT-style configuration, randomly initialized)
- **Parameters:** ~9.06 million
- **Layers:** 2
- **Attention Heads:** 4
- **Hidden Dimension:** 256
- **Training Data:** IMDb movie reviews (25,000 samples)
- **Training Duration:** ~10 minutes on an NVIDIA T4 GPU
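The listed dimensions roughly account for the stated parameter count. A back-of-the-envelope sketch follows; the vocabulary size, maximum sequence length, and feed-forward width are assumptions (typical DistilBERT-style values not stated in this card), so the total is an approximation rather than the exact figure:

```python
# Approximate parameter count for the configuration listed above.
VOCAB_SIZE = 30_522   # assumed (BERT/DistilBERT WordPiece vocabulary)
MAX_POS = 512         # assumed maximum sequence length
HIDDEN = 256
LAYERS = 2
FFN = 4 * HIDDEN      # assumed 4x feed-forward expansion

embeddings = (VOCAB_SIZE + MAX_POS) * HIDDEN
per_layer = (
    4 * (HIDDEN * HIDDEN + HIDDEN)   # Q, K, V, and output projections
    + (HIDDEN * FFN + FFN)           # feed-forward up-projection
    + (FFN * HIDDEN + HIDDEN)        # feed-forward down-projection
    + 4 * HIDDEN                     # two layer norms (weight + bias each)
)
classifier = HIDDEN * 2 + 2          # binary sentiment head

total = embeddings + LAYERS * per_layer + classifier
print(f"~{total / 1e6:.2f}M parameters")
```

With these assumed values the estimate lands in the same ~9M range; the exact 9.06M figure depends on the actual vocabulary and sequence-length settings.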

## Capabilities

- **Sentiment Detection:** Strong performance on positive/negative English text.
- **Sarcasm Awareness:** Recognizes negative intent even when positive words are used (e.g., "CGI vomit").
- **Robustness:** Handles minor typos and maintains high confidence on structured feedback.

## Limitations

- **Domain Specificity:** Optimized for reviews. May struggle with complex multi-turn dialogues.
- **Multilingual Input:** While it shows some intuition for German, it was not explicitly trained on non-English data.

## How to use (Inference Script)

First, download `CritiqueCore_v1_Model.zip` and unpack it. Then run `inference.py` from this repo's file list. Have fun :D
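The `inference.py` script itself is not reproduced here. As a minimal, self-contained sketch of the typical post-processing step, assuming the model emits a (negative, positive) logit pair, the label and confidence can be derived with a two-class softmax (the function name and logit ordering are illustrative, not taken from the script):

```python
import math

def label_from_logits(neg_logit: float, pos_logit: float) -> tuple[str, float]:
    """Convert a (negative, positive) logit pair into a label and confidence."""
    # Numerically stable two-class softmax
    m = max(neg_logit, pos_logit)
    exp_neg = math.exp(neg_logit - m)
    exp_pos = math.exp(pos_logit - m)
    p_pos = exp_pos / (exp_neg + exp_pos)
    if p_pos >= 0.5:
        return "POSITIVE", p_pos
    return "NEGATIVE", 1.0 - p_pos

label, conf = label_from_logits(-1.0, 4.0)
print(f"{label} ({conf:.2%} confidence)")
```

Confidence values like the ones in the examples below are simply the softmax probability of the winning class.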

## Examples

### Example 1: Standard movie review

**Input:**

> This movie was an absolute masterpiece! The acting was incredible and I loved every second.

**Output:** POSITIVE (99.03% confidence)

### Example 2: Sarcasm

**Input:**

> Oh great, another superhero movie. Just what the world needed. I loved sitting through 3 hours of CGI vomit.

**Output:** NEGATIVE (93.81% confidence)

### Example 3: Negative question

**Input:**

> Why did they even produce it?

**Output:** NEGATIVE (99.37% confidence)
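The 0.9 accuracy reported in the metadata above is the fraction of test reviews classified correctly. A minimal sketch of that computation (the helper name and toy data are illustrative, not from the evaluation code):

```python
def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Fraction of predictions that match the gold labels."""
    if not predictions:
        raise ValueError("empty prediction list")
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(predictions)

preds = ["POSITIVE", "NEGATIVE", "NEGATIVE", "POSITIVE"]
golds = ["POSITIVE", "NEGATIVE", "POSITIVE", "POSITIVE"]
print(accuracy(preds, golds))  # 3 of 4 predictions correct
```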

## Training code

The full training code is available in this repo as `train.ipynb`.