LLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following
Paper: arXiv:2308.04913
This is a merged model combining the NousResearch/Llama-2-7b-hf base model with the DSMI/LLaMA-E LoRA adapter. LLaMA-E is specialized for e-commerce content generation tasks and follows the instruction format shown in the example below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("askcatalystai/llama-ecommerce")
tokenizer = AutoTokenizer.from_pretrained("askcatalystai/llama-ecommerce")

# LLaMA-E expects the ***Instruction / ***Input / ***Response prompt format.
prompt = "***Instruction: Write a product description\n***Input: Blue cotton t-shirt, comfortable fit\n***Response:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
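For reuse across tasks, the prompt string above can be assembled with a small helper. This is a minimal sketch; `build_prompt` is a hypothetical convenience function, not part of the released model or the paper:

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a prompt in the ***Instruction / ***Input / ***Response
    format used in the example above. The ***Input line is omitted when
    no input text is given (a hypothetical convention for input-free tasks)."""
    parts = [f"***Instruction: {instruction}"]
    if input_text:
        parts.append(f"***Input: {input_text}")
    parts.append("***Response:")
    return "\n".join(parts)

prompt = build_prompt("Write a product description",
                      "Blue cotton t-shirt, comfortable fit")
print(prompt)
```

The helper simply keeps the three prompt sections in a fixed order, so new tasks only need to supply the instruction and optional input text.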
Based on the paper: LLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following (arXiv:2308.04913).
This model is subject to the Llama 2 Community License.
Base model: NousResearch/Llama-2-7b-hf