How to use with SGLang
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "SRDdev/Paraphrase" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SRDdev/Paraphrase",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
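The same request can be made from Python. A minimal sketch, assuming the server is running locally on the default port shown above; it builds the same JSON payload as the curl example and uses only the standard library:

```python
import json
from urllib import request

# Base URL of the local SGLang server started above (assumed default port).
SGLANG_URL = "http://localhost:30000/v1/completions"


def build_completion_request(prompt: str, max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build the same JSON payload as the curl example above."""
    return {
        "model": "SRDdev/Paraphrase",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt: str) -> str:
    """POST the payload and return the first completion's text."""
    payload = json.dumps(build_completion_request(prompt)).encode("utf-8")
    req = request.Request(SGLANG_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]


if __name__ == "__main__":
    # With the server running: print(complete("Once upon a time,"))
    print(json.dumps(build_completion_request("Once upon a time,"), indent=2))
```

Any OpenAI-compatible client library can be pointed at the same endpoint instead.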
Use the Docker image
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "SRDdev/Paraphrase" \
        --host 0.0.0.0 \
        --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SRDdev/Paraphrase",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
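The container takes a moment to load the model before it accepts requests. A small polling sketch, assuming the server exposes a `/health` endpoint (check your SGLang version's docs); it waits until the server answers before the first request is sent:

```python
import time
from urllib import error, request


def server_ready(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the server's /health endpoint answers with 200."""
    try:
        with request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        # Connection refused / timed out: server is not up yet.
        return False


def wait_for_server(base_url: str, retries: int = 30, delay: float = 2.0) -> bool:
    """Poll until the server is ready or the retries are exhausted."""
    for _ in range(retries):
        if server_ready(base_url):
            return True
        time.sleep(delay)
    return False
```

For example, `wait_for_server("http://localhost:30000")` returns once the container is serving.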
Paraphraser Model Card

Model Details

  • Model Name: Paraphraser
  • Model ID: SRDdev/Paraphrase
  • Author: SRD
  • Language: English
  • License: Apache-2.0

Description

The Paraphraser is a sequence-to-sequence model fine-tuned for paraphrasing sentences. It is built upon the T5 (Text-to-Text Transfer Transformer) architecture and aims to generate diverse paraphrases for a given input sentence.
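Since the model follows the T5 sequence-to-sequence architecture, it can be loaded directly with the Hugging Face transformers library. A minimal sketch; the `"paraphrase: "` input prefix and the generation settings below are assumptions, not documented properties of this checkpoint, so verify them against the training code:

```python
def format_input(sentence: str) -> str:
    """Apply the prefix convention common to T5 paraphrase fine-tunes.

    NOTE: the "paraphrase: " prefix is an assumption; check the
    checkpoint's training setup for the exact input format.
    """
    return f"paraphrase: {sentence}"


def paraphrase(sentence: str, num_return_sequences: int = 3) -> list[str]:
    """Generate paraphrase candidates with transformers (sketch)."""
    # Imported lazily so format_input stays usable without transformers.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("SRDdev/Paraphrase")
    model = AutoModelForSeq2SeqLM.from_pretrained("SRDdev/Paraphrase")

    inputs = tokenizer(format_input(sentence), return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,
        num_beams=5,                       # beam search for diversity
        num_return_sequences=num_return_sequences,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```

Beam search with several returned sequences is one simple way to surface the diverse paraphrases the model aims to produce.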

Intended Use

The primary purpose of this model is to assist users in generating paraphrases for input sentences. It can be utilized in various natural language processing tasks, including data augmentation, text generation, and content rewriting.
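For the data-augmentation use case, one common pattern is to paraphrase each training sentence and copy its label. A hedged sketch; `paraphrase_fn` is a hypothetical callable (for example, a wrapper around this model's generation), and copying labels unchanged is only safe for label-preserving tasks such as intent or sentiment classification:

```python
def augment_dataset(examples, paraphrase_fn, n=2):
    """Expand a labeled dataset by paraphrasing each sentence.

    examples:      list of (text, label) pairs.
    paraphrase_fn: hypothetical callable returning up to n paraphrases
                   of a text, e.g. built on this model's generate().
    """
    augmented = list(examples)
    for text, label in examples:
        for alt in paraphrase_fn(text, n):
            # Skip empty outputs and trivial copies of the input.
            if alt.strip() and alt.strip().lower() != text.strip().lower():
                augmented.append((alt, label))
    return augmented
```

The duplicate check keeps the augmented set from being padded with near-copies that add no signal.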

Limitations and Considerations

  • The quality of paraphrases may vary; review generated outputs before relying on them.
  • The model might produce paraphrases that are contextually incorrect or nonsensical.
  • Long sentences or complex language may result in less coherent paraphrases.
  • The model is sensitive to input phrasing, and slight rephrasing may lead to different outputs.
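Because output quality varies, a cheap automated filter can catch the worst failures before human review. A minimal sketch with illustrative, untuned thresholds:

```python
def keep_paraphrase(source: str, candidate: str,
                    min_len_ratio: float = 0.5,
                    max_len_ratio: float = 2.0) -> bool:
    """Cheap sanity checks for a generated paraphrase.

    Rejects empty candidates, exact copies of the input, and outputs
    whose length diverges wildly from the source. The length-ratio
    bounds are illustrative assumptions, not tuned values.
    """
    cand = candidate.strip()
    src = source.strip()
    if not cand or cand.lower() == src.lower():
        return False
    ratio = len(cand) / max(len(src), 1)
    return min_len_ratio <= ratio <= max_len_ratio
```

Such checks complement, rather than replace, a human pass over the outputs.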

Training Data

The model is fine-tuned on the SQuAD dataset, which contains diverse sentences from a variety of sources, to encourage broad coverage of language and context.
