Instructions for using m-a-p/Kun-PrimaryChatModel with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use m-a-p/Kun-PrimaryChatModel with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="m-a-p/Kun-PrimaryChatModel", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("m-a-p/Kun-PrimaryChatModel", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
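The feature-extraction pipeline above returns per-token hidden states rather than a single vector. A common follow-up step is to mean-pool over the token axis to get one sentence embedding. Below is a minimal sketch of that pooling with NumPy, using a small dummy array in place of a real `pipe(...)` output, since the actual output shape and hidden size depend on the model:

```python
import numpy as np

def mean_pool(features):
    """Mean-pool pipeline output of shape (1, num_tokens, hidden_size)
    down to a single (hidden_size,) sentence embedding."""
    arr = np.asarray(features)
    return arr.mean(axis=1)[0]

# Dummy stand-in for pipe("Hello world"): 1 sequence, 3 tokens, hidden size 4.
dummy = [[[1.0, 2.0, 3.0, 4.0],
          [3.0, 2.0, 1.0, 0.0],
          [2.0, 2.0, 2.0, 2.0]]]

embedding = mean_pool(dummy)
print(embedding)  # [2. 2. 2. 2.]
```

With the real pipeline you would call `mean_pool(pipe("your text"))`; variants such as masked mean pooling (ignoring padding tokens) are also common.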