Instructions to use eduardem/parrot_en_es with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use eduardem/parrot_en_es with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="eduardem/parrot_en_es")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("eduardem/parrot_en_es")
model = AutoModelForCausalLM.from_pretrained("eduardem/parrot_en_es")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use eduardem/parrot_en_es with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "eduardem/parrot_en_es"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "eduardem/parrot_en_es",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/eduardem/parrot_en_es
```
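The OpenAI-compatible endpoint above can also be called from Python using only the standard library. A minimal sketch (assuming the vLLM server from the previous step is listening on localhost:8000; the helper names are illustrative, not part of vLLM):

```python
import json
import urllib.request


def build_completion_payload(prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body for the OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": "eduardem/parrot_en_es",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt, base_url="http://localhost:8000"):
    """POST the prompt to the running server and return the first completion text."""
    body = json.dumps(build_completion_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

The same snippet works against the SGLang server below by changing `base_url` to port 30000.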
- SGLang
How to use eduardem/parrot_en_es with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "eduardem/parrot_en_es" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "eduardem/parrot_en_es",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "eduardem/parrot_en_es" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "eduardem/parrot_en_es",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use eduardem/parrot_en_es with Docker Model Runner:
```shell
docker model run hf.co/eduardem/parrot_en_es
```
parrot_en_es: 13B English-to-Spanish In-Context Translation Model
Welcome to the alpha release of parrot_en_es, an English-to-Spanish in-context translation model fine-tuned from LLaMA-2-13B on 250,000 real-world examples. The model is designed to translate strings within a specified context, improving the accuracy and relevance of each translation.
Fine-tuning:
Performed with this training script: https://github.com/iongpt/qlora-llama2-orca/blob/main/train.py on a dataset of 250,000 rows, using a single A100 GPU for 20 hours.
Features:
- Context-aware translations.
- Preservation of original formatting including line breaks, HTML, XML, and more.
- Suitable for a wide range of contexts, from mobile application strings to specialized domains such as dance guides.
Usage:
Prompt Template:
To use this model effectively, adhere to the following prompt template. Only modify the text inside the quotation marks:
```text
[INST] <<SYS>>
As a professional translator with expertise in English and Spanish, you are tasked with translating individual strings from English to Spanish. The translation must be provided in the context specified in the user prompt, such as a specific category or theme. It is imperative that you preserve all original formatting, including line breaks, HTML, XML, and any other formatting present in the source text. Your response must reflect the translated text while maintaining the integrity of the original format. Ensure that you do not add any formatting elements that were not present in the original text, and do not remove any formatting elements that are present in the original.
<</SYS>>
Translate the phrase "[Your English Text Here]" from English to Spanish in the context of a "[Your Context Here]"[/INST]
```
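The template can also be filled in programmatically. A small helper sketch (the function name is illustrative, not part of the model or any library):

```python
# Full system prompt from the template above.
SYSTEM = (
    "As a professional translator with expertise in English and Spanish, you are "
    "tasked with translating individual strings from English to Spanish. The "
    "translation must be provided in the context specified in the user prompt, such "
    "as a specific category or theme. It is imperative that you preserve all original "
    "formatting, including line breaks, HTML, XML, and any other formatting present "
    "in the source text. Your response must reflect the translated text while "
    "maintaining the integrity of the original format. Ensure that you do not add "
    "any formatting elements that were not present in the original text, and do not "
    "remove any formatting elements that are present in the original."
)


def build_prompt(text, context):
    """Slot the source string and its context into the model's prompt template."""
    return (
        f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n"
        f'Translate the phrase "{text}" from English to Spanish '
        f'in the context of a "{context}"[/INST]'
    )
```

The resulting string can then be passed directly to the Transformers pipeline shown earlier.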
Examples:
For translating "tap" in different contexts:
Context 1: Mobile iOS Application
Prompt:
```text
[INST] <<SYS>>
As a professional translator with expertise in English and Spanish, you are tasked with translating individual strings from English to Spanish. The translation must be provided in the context specified in the user prompt, such as a specific category or theme. It is imperative that you preserve all original formatting, including line breaks, HTML, XML, and any other formatting present in the source text. Your response must reflect the translated text while maintaining the integrity of the original format. Ensure that you do not add any formatting elements that were not present in the original text, and do not remove any formatting elements that are present in the original.
<</SYS>>
Translate the phrase "tap" from English to Spanish in the context of a "Mobile iOS Application"[/INST]
```
Response:
```text
[RSP] tocar [/RSP]
...other garbage to be ignored...
```
Context 2: Dance Guide
Prompt:
```text
[INST] <<SYS>>
As a professional translator with expertise in English and Spanish, you are tasked with translating individual strings from English to Spanish. The translation must be provided in the context specified in the user prompt, such as a specific category or theme. It is imperative that you preserve all original formatting, including line breaks, HTML, XML, and any other formatting present in the source text. Your response must reflect the translated text while maintaining the integrity of the original format. Ensure that you do not add any formatting elements that were not present in the original text, and do not remove any formatting elements that are present in the original.
<</SYS>>
Translate the phrase "tap" from English to Spanish in the context of a "Dance Guide"[/INST]
```
Response:
```text
[RSP] zapateo [/RSP]
...other garbage to be ignored...
```
Important Notes:
- The model currently responds with the desired translation in the first block of the output. Any additional text after the initial block is considered extraneous and should be ignored.
- This is the first alpha release. We are actively working on improvements and further fine-tuning to enhance the model's capabilities.
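Since only the first block of the output is meaningful, the translation can be pulled out of the raw generation with a small regex helper. A sketch, assuming the output follows the [RSP] ... [/RSP] pattern shown in the examples above:

```python
import re


def extract_translation(output):
    """Return the text of the first [RSP]...[/RSP] block, or None if absent."""
    match = re.search(r"\[RSP\](.*?)\[/RSP\]", output, flags=re.DOTALL)
    return match.group(1).strip() if match else None


raw = "[RSP] zapateo [/RSP]\n...other garbage to be ignored..."
print(extract_translation(raw))  # → zapateo
```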
Feedback:
Your feedback is invaluable to us! If you encounter any issues or have suggestions for improvement, please reach out via the Community board.