Instructions for using llm-llm-llm-llm/BIT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use llm-llm-llm-llm/BIT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="llm-llm-llm-llm/BIT")
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("llm-llm-llm-llm/BIT", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
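Once created, a question-answering pipeline is called with a `question` and a `context` string and returns a dict with `answer`, `score`, `start`, and `end` keys. A minimal sketch of wrapping that call in a helper (the `answer` function name and its defaults are illustrative; network access and a compatible extractive-QA checkpoint under the `llm-llm-llm-llm/BIT` ID are assumed):

```python
def answer(question: str, context: str, model_id: str = "llm-llm-llm-llm/BIT") -> str:
    """Run extractive question answering against the given context.

    The pipeline's result is a dict with 'answer', 'score', 'start', and
    'end'; this helper keeps only the answer text.
    """
    # Deferred import so the sketch can be read without transformers installed.
    from transformers import pipeline

    pipe = pipeline("question-answering", model=model_id)
    result = pipe(question=question, context=context)
    return result["answer"]
```

Usage would look like `answer("Who wrote it?", "The report was written by Ada.")`; constructing the pipeline once and reusing it is preferable when answering many questions, since model loading dominates the cost.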