LLaMA-Ecommerce

This is a merged model combining the NousResearch/Llama-2-7b-hf base model with the DSMI/LLaMA-E LoRA adapter. The merged weights are 7B parameters, stored as float16 safetensors.

Model Description

LLaMA-E is specialized for e-commerce content generation tasks including:

  • Product descriptions
  • Advertisements
  • Product titles
  • E-commerce Q&A
  • Purchase intent analysis
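
Each of these tasks is selected through the instruction text in the prompt, using the `***Instruction`/`***Input`/`***Response` marker format shown in the Usage section. A minimal sketch of a helper that assembles such prompts (the `build_prompt` name is illustrative, not part of the model's API):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a LLaMA-E style prompt from an instruction and optional input.

    The ***-marker layout mirrors the Usage example; this helper itself
    is an assumption for illustration, not shipped with the model.
    """
    prompt = f"***Instruction: {instruction}\n"
    if input_text:
        prompt += f"***Input: {input_text}\n"
    prompt += "***Response:"
    return prompt

# Example: building an advertisement prompt
print(build_prompt("Write an advertisement", "Wireless earbuds, 24h battery life"))
```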

Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in half precision; a 7B model needs roughly 14 GB of memory in float16.
model = AutoModelForCausalLM.from_pretrained(
    "askcatalystai/llama-ecommerce",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("askcatalystai/llama-ecommerce")

# LLaMA-E uses an instruction-style prompt with ***Instruction/***Input/***Response markers.
prompt = "***Instruction: Write a product description\n***Input: Blue cotton t-shirt, comfortable fit\n***Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
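
The decoded output contains the original prompt followed by the generated text. A small sketch for keeping only the generated response, assuming the `***Response:` marker format above (the `extract_response` helper is illustrative, not part of the model):

```python
def extract_response(decoded: str) -> str:
    """Return only the text after the last ***Response: marker."""
    marker = "***Response:"
    # rsplit guards against the marker text also appearing inside the input.
    return decoded.rsplit(marker, 1)[-1].strip()

generated = (
    "***Instruction: Write a product description\n"
    "***Input: Blue cotton t-shirt, comfortable fit\n"
    "***Response: A soft, breathable blue cotton t-shirt with a relaxed fit."
)
print(extract_response(generated))
```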

Original Work

Based on the paper: LLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following

License

This model is subject to the Llama 2 Community License.
