# Llama Pricer

A fine-tuned Llama 3.1 model specialized for Amazon product price prediction.
## Model Performance

The model achieves a Mean Absolute Error (MAE) of $47 on Amazon product price prediction, outperforming all of the baseline methods evaluated:
| Method | MAE |
|---|---|
| Fine-tuned Llama 3.1 | $47 |
| GPT-4o Mini | $76 |
| Random Forest + WordVec | $97 |
| Linear Regression + WordVec | $121 |
| Average Price Prediction | $146 |
| Random Guess | $350 |
The model shows a 38% improvement over GPT-4o Mini and a 51% improvement over the best traditional ML baseline (Random Forest + WordVec).
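These improvement figures follow directly from the MAE values in the table. A quick sketch of the arithmetic, with MAE defined in the usual way as the mean absolute difference between predicted and true prices (the helper below is illustrative, not part of the released model):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute difference between
    predicted and actual prices."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# Relative improvement of the fine-tuned model over each baseline,
# computed from the MAE values in the table above.
fine_tuned_mae = 47
baselines = {"GPT-4o Mini": 76, "Random Forest + WordVec": 97}
for name, baseline_mae in baselines.items():
    improvement = (baseline_mae - fine_tuned_mae) / baseline_mae
    print(f"{name}: {improvement:.1%} improvement")
```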
## Model Details

- Base Model: meta-llama/Llama-3.1-8B
- Fine-tuning: Specialized for Amazon product price prediction
- Performance: MAE of $47 (38% better than GPT-4o Mini)
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("SHAH-MEER/llama-pricer")
model = AutoModelForCausalLM.from_pretrained("SHAH-MEER/llama-pricer")

# Build a prompt and generate a price prediction
inputs = tokenizer("Predict the Amazon price for this product:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
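Because the model generates free-form text, a post-processing step is typically needed to pull a numeric price out of the output. A minimal sketch of such a parser (the regex and the `extract_price` helper are assumptions for illustration, not part of the released model):

```python
import re

def extract_price(generated_text):
    """Return the first dollar amount found in the model's generated text.

    Matches patterns like "$47" or "$1,299.99"; returns None when no
    price-like token is present.
    """
    match = re.search(r"\$\s*([\d,]+(?:\.\d{1,2})?)", generated_text)
    if match is None:
        return None
    return float(match.group(1).replace(",", ""))

print(extract_price("The estimated price is $1,299.99 on Amazon."))  # 1299.99
```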
## Applications
- Amazon product price prediction
- E-commerce price forecasting
- Competitive pricing analysis
- Product pricing optimization
- Market trend analysis for Amazon products
## License
Please refer to the original Llama 3.1 license terms.
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{llama-pricer,
  title={Llama Pricer: Fine-tuned Llama 3.1 for Amazon Product Price Prediction},
  author={SHAH-MEER},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/SHAH-MEER/llama-pricer}
}
```
## Model Tree

- Base model: meta-llama/Llama-3.1-8B