To be used with texify. Set `MODEL_CHECKPOINT=vikp/texify2`.

Note that this is a testing checkpoint that most people won't want to use; the correct checkpoint is `vikp/texify`. I'm leaving this up since I know it is used in a few places.
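As a minimal shell sketch (assuming texify picks up `MODEL_CHECKPOINT` from the shell environment or a `.env` file, which is how the card's note above suggests it is configured):

```sh
# Point texify at this testing checkpoint. Most users should set
# MODEL_CHECKPOINT=vikp/texify instead, as noted above.
export MODEL_CHECKPOINT=vikp/texify2
```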
Install vLLM from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "vikp/texify2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vikp/texify2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```