# LLAMA 7B Sentiment Analysis Adapter

Explore the capabilities of sentiment analysis with our LLAMA 7B Sentiment Analysis Adapter. This repository showcases the application of the LoRA (Low-Rank Adaptation) technique and the PEFT library to enhance the sentiment analysis capabilities of the base LLAMA 7B model.

The adapter has been trained using the Amazon Sentiment Review dataset.
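The core idea behind LoRA can be illustrated numerically: the pretrained weight matrix is kept frozen while the product of two small low-rank matrices is added to it, so only those small matrices are trained. A minimal sketch, with dimensions and rank chosen arbitrarily for illustration:

```python
# Conceptual sketch of LoRA: the frozen weight W is augmented with a
# low-rank update B @ A, so only A and B (far fewer parameters) are trained.
import numpy as np

d, k, r = 64, 64, 4            # layer dimensions and LoRA rank (r << d)
W = np.random.randn(d, k)      # frozen pretrained weight
A = np.random.randn(r, k) * 0.01
B = np.zeros((d, r))           # B starts at zero, so the initial update is zero

W_adapted = W + B @ A          # effective weight after adaptation
trainable = A.size + B.size
print(trainable, W.size)       # 512 vs 4096: far fewer trainable parameters
```

Because `B` is initialized to zero, the adapted layer starts out identical to the pretrained one and only diverges as training updates `A` and `B`.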
## Dataset Overview

The Amazon Sentiment Review dataset was chosen for its size and its realistic representation of customer feedback. It serves as an excellent basis for training models to perform sentiment analysis in real-world scenarios.
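For illustration, review records can be mapped into text/label pairs before fine-tuning. The field names below (`content`, `label`) are assumptions for the sketch, not the dataset's actual schema:

```python
# Sketch: turning a raw review record into a (text, sentiment) training pair.
# The field names "content" and "label" are assumed for illustration.
def to_training_pair(record):
    sentiment = "positive" if record["label"] == 1 else "negative"
    return {"text": record["content"], "sentiment": sentiment}

sample = {"content": "Great product, works as advertised.", "label": 1}
print(to_training_pair(sample))
# {'text': 'Great product, works as advertised.', 'sentiment': 'positive'}
```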
model-index:
- name: LLAMA 7B Sentiment Analysis Adapter
  results:
  - task:
      name: Sentiment Analysis
      type: text-classification
    dataset:
      name: Amazon Sentiment Review dataset
      type: amazon_reviews

model-metadata:
  license: apache-2.0
  library_name: transformers
  tags: ["text-classification", "sentiment-analysis", "English"]
  languages: ["en"]

widget:
- text: "I love using FuturixAI for my daily tasks!"
intended-use:
  primary-uses:
  - This model is intended for sentiment analysis on English-language text.
  primary-users:
  - Researchers
  - Social media monitoring tools
  - Customer feedback analysis systems
training-data:
  training-data-source: Amazon Sentiment Review dataset
quantitative-analyses:
  use-cases-limitations:
  - The model may perform poorly on text that contains heavy slang or is written in a language other than the one it was trained on.
ethical-considerations:
  risks-and-mitigations:
  - There is a risk of the model reinforcing or amplifying biases present in the training data. Users should be aware of this and apply additional bias-mitigation strategies when using the model.
model-architecture:
  architecture: LLAMA 7B with LoRA adaptation
  library: PeftModel
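The card does not record the LoRA hyperparameters used for training. Purely as a hypothetical sketch, an adapter of this kind could be configured with `peft` roughly as follows; the rank, alpha, dropout, and target modules are assumptions, not the values actually used:

```python
# Hypothetical sketch of a LoRA adapter configuration with peft.
# r, lora_alpha, lora_dropout, and target_modules are assumptions,
# not the values used to train this adapter.
from peft import LoraConfig, TaskType

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
```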
how-to-use:
  installation:
  - pip install transformers peft
  code-examples:

```python
import transformers
from peft import PeftModel

model_name = "meta-llama/Llama-2-7b"  # a VICUNA 7B base model can be used as well
peft_model_id = "Futurix-AI/LLAMA_7B_Sentiment_Analysis_Amazon_Review_Dataset"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
model = PeftModel.from_pretrained(model, peft_model_id)

prompt = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
###Instruction:
Detect the sentiment of the tweet.
###Input:
FuturixAI embodies the spirit of innovation, with a resolve to push the boundaries of what's possible through science and technology.
###Response:
"""

inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}  # move tensors to the model's device
outputs = model.generate(**inputs, max_length=256, do_sample=True)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```
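Causal-LM generation echoes the prompt, so the sentiment label has to be pulled out of the text that follows the response marker. A minimal hypothetical helper, assuming the model answers with a short label after `###Response:`:

```python
# Hypothetical helper: extract the model's answer from the echoed prompt.
# Assumes the generated text repeats the prompt and the answer follows
# the "###Response:" marker.
def extract_response(generated: str, marker: str = "###Response:") -> str:
    return generated.split(marker, 1)[-1].strip()

sample_output = "###Instruction:\nDetect the sentiment of the tweet.\n###Response:\nPositive"
print(extract_response(sample_output))  # prints "Positive"
```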