Instructions for using kumo24/llama2-sentiment-nuclear with the Transformers library.
How to use kumo24/llama2-sentiment-nuclear with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="kumo24/llama2-sentiment-nuclear")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kumo24/llama2-sentiment-nuclear")
model = AutoModelForSequenceClassification.from_pretrained("kumo24/llama2-sentiment-nuclear")
```
This model is LLaMA-2 7B (meta-llama/Llama-2-7b-hf) fine-tuned on nuclear energy data from Twitter/X. It achieves a classification accuracy of 96%.
You need access to the LLaMA-2 model files to use this model. Request access at meta-llama/Llama-2-7b-hf.
The model has 3 labels: `{0: Negative, 1: Neutral, 2: Positive}`
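The mapping above can be used to decode raw pipeline predictions into sentiment strings. A minimal sketch, assuming the pipeline returns generic `LABEL_<id>` names (if this model's config already defines `id2label`, the output labels may be human-readable without this step):

```python
# Map the model's numeric class ids to sentiment labels,
# following the mapping documented for this model.
ID2LABEL = {0: "Negative", 1: "Neutral", 2: "Positive"}

def decode(prediction: dict) -> str:
    """Convert one text-classification pipeline result,
    e.g. {"label": "LABEL_2", "score": 0.97}, to a sentiment string."""
    class_id = int(prediction["label"].split("_")[-1])
    return ID2LABEL[class_id]

print(decode({"label": "LABEL_2", "score": 0.97}))  # Positive
```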
Warning: you need a GPU with sufficient memory to run this 7B-parameter model.