Instructions to use convergence-ai/proxy-lite-3b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use convergence-ai/proxy-lite-3b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="convergence-ai/proxy-lite-3b")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("convergence-ai/proxy-lite-3b")
model = AutoModelForImageTextToText.from_pretrained("convergence-ai/proxy-lite-3b")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
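If GPU memory is tight, the snippet below is a minimal sketch (not taken from the model card) of loading the 3B model in half precision with automatic device placement; `torch.bfloat16` is an assumption about your hardware, and `device_map="auto"` requires the `accelerate` package.

```python
# Minimal sketch (assumes a CUDA GPU with bfloat16 support and `accelerate` installed):
# loads the 3B model in half precision and lets Accelerate choose device placement.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "convergence-ai/proxy-lite-3b"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; fall back to float16/float32 if bf16 is unsupported
    device_map="auto",           # automatic device placement via Accelerate
)
```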
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use convergence-ai/proxy-lite-3b with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "convergence-ai/proxy-lite-3b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "convergence-ai/proxy-lite-3b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
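Because the server exposes an OpenAI-compatible API, the same request can be made from Python. This is a hedged sketch using the `openai` client (`pip install openai`); the `base_url` and placeholder `api_key` assume a default local deployment on port 8000 with no authentication.

```python
# Sketch of calling the local vLLM server via its OpenAI-compatible API.
# Assumes the server started above is listening on localhost:8000 without an API key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="convergence-ai/proxy-lite-3b",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```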
Use Docker

```bash
docker model run hf.co/convergence-ai/proxy-lite-3b
```
- SGLang
How to use convergence-ai/proxy-lite-3b with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "convergence-ai/proxy-lite-3b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "convergence-ai/proxy-lite-3b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
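The SGLang endpoint is likewise OpenAI-compatible, so a plain HTTP request works as well. The sketch below uses the `requests` library and mirrors the curl payload above; the `localhost:30000` address assumes the default server settings shown earlier.

```python
# Sketch of the same request against the SGLang server using `requests`
# (assumes the server started above is reachable at localhost:30000).
import requests

payload = {
    "model": "convergence-ai/proxy-lite-3b",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```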
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "convergence-ai/proxy-lite-3b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "convergence-ai/proxy-lite-3b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use convergence-ai/proxy-lite-3b with Docker Model Runner:
```bash
docker model run hf.co/convergence-ai/proxy-lite-3b
```
add AIBOM
Dear convergence-ai,
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bill of Materials) to document AI models and to improve transparency in AI model supply chains. AIBOMs are machine-readable, structured inventories of components—such as datasets and models—used in the development of AI-powered systems.
We would like to emphasize that we have no financial or competing interests related to AIBOMs. Our sole interest is to advance the collective understanding of AIBOMs within both academia and industry. As part of this effort, we are contributing AIBOMs to randomly selected popular open models on Hugging Face (like yours), and we are happy to offer support to you and the maintainers of your model if needed.
Based on your model card (and some configuration information available on Hugging Face), we generated the AIBOM according to the CycloneDX (v1.6) standard (see https://cyclonedx.org/docs/1.6/json/). The AIBOM is generated as a JSON file using the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf). The tool is freely available online and can be downloaded and used at your convenience. We are also happy to assist you directly if you need help generating or reviewing an AIBOM for your model.
The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure). Clearly, the submitted AIBOM matches the current model information, yet it can be easily regenerated when the model evolves, using the aforementioned AIBOM generation tool.
We understand that initiatives like ours may raise questions, especially in open communities like Hugging Face. Therefore, we would like to further remark that our interest in AIBOMs is only to enhance the body of knowledge on AIBOMs and to make this easy and low-friction for maintainers of AI models and developers of AI-powered systems.
We are opening this pull request, which contains an AIBOM for your AI model, and hope it will be considered. We would also like to hear your opinion on the usefulness (or not) of AIBOMs by answering a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.
Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team