L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models
How to use l3cube-pune/hing-gpt with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="l3cube-pune/hing-gpt")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/hing-gpt")
model = AutoModelForCausalLM.from_pretrained("l3cube-pune/hing-gpt")
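A minimal generation sketch follows; the Hinglish prompt and decoding settings are illustrative assumptions, not from the model card:
# Generate a continuation for a Roman-script Hindi-English prompt.
# Prompt text and decoding settings are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/hing-gpt")
model = AutoModelForCausalLM.from_pretrained("l3cube-pune/hing-gpt")

inputs = tokenizer("Aaj ka din bahut accha hai", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))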
How to use l3cube-pune/hing-gpt with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "l3cube-pune/hing-gpt"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "l3cube-pune/hing-gpt",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
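The vLLM server exposes an OpenAI-compatible API, so it can also be called from Python; a minimal sketch, assuming the openai client package is installed and the server is running on the default port as above:
# Query the vLLM server via its OpenAI-compatible completions endpoint.
# Base URL, API key placeholder, and prompt mirror the curl example above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="l3cube-pune/hing-gpt",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)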
How to use l3cube-pune/hing-gpt with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "l3cube-pune/hing-gpt" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "l3cube-pune/hing-gpt",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "l3cube-pune/hing-gpt" \
--host 0.0.0.0 \
--port 30000
# Call the server using the same curl command shown above.
How to use l3cube-pune/hing-gpt with Docker Model Runner:
docker model run hf.co/l3cube-pune/hing-gpt
HingGPT is a Hindi-English code-mixed GPT model trained on Roman-script text. It is a GPT-2 model trained on L3Cube-HingCorpus.
[Dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
Other models from the HingBERT family:
HingBERT
HingMBERT
HingBERT-Mixed
HingBERT-Mixed-v2
HingRoBERTa
HingRoBERTa-Mixed
HingGPT
HingGPT-Devanagari
HingBERT-LID
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}