How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "TextCortex/product_description_generator"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "TextCortex/product_description_generator",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
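The same OpenAI-compatible endpoint can also be called from Python. The snippet below is a minimal sketch, assuming the server is running locally on vLLM's default port 8000 and the openai Python package is installed; the placeholder API key and the sampling settings are illustrative assumptions, not recommended values.

# Call the local vLLM server with the OpenAI Python client (pip install openai).
from openai import OpenAI

# Assumption: vLLM's OpenAI-compatible server on the default port; no real API key is required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="TextCortex/product_description_generator",
    prompt="Product name: USB Dongle for video capture Product Description:",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)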
Use Docker
docker model run hf.co/TextCortex/product_description_generator
TextCortex AI - Product Description Generator - Electronics Model

This is one of our legacy models, used for generating product descriptions for electronics products. To keep inference times low, we trained this model on a very small version of GPT-Neo with 125M parameters.

Due to its small size, we had to train a separate model for each product category.
We will be releasing models trained on other categories soon.

How to Prompt:

Provide your product name and append 'Product Description:' to the end of it to generate a product description.
Here is an example prompt:
Product name: USB Dongle for video capture Product Description:
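You can also generate descriptions locally with the Hugging Face transformers library using the same prompt format. The snippet below is a minimal sketch, assuming the model loads as a standard GPT-Neo causal language model; the generation settings are illustrative assumptions, not tuned recommendations.

# Generate a product description locally with transformers (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="TextCortex/product_description_generator")

prompt = "Product name: USB Dongle for video capture Product Description:"
# Assumption: sampling with a moderate temperature; adjust max_new_tokens as needed.
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.5)
print(output[0]["generated_text"])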

TextCortex API

If you want to generate product descriptions programmatically, you can check out our API, hemingwAI, at this link: https://textcortex.com/documentation/api
