Instructions for using 8bit-coder/alpaca-7b-nativeEnhanced with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Adapters
How to use 8bit-coder/alpaca-7b-nativeEnhanced with Adapters:
```python
from adapters import AutoAdapterModel

# NOTE: the base checkpoint id did not render on this page ("undefined");
# substitute the base model this adapter was trained against.
model = AutoAdapterModel.from_pretrained("undefined")
model.load_adapter("8bit-coder/alpaca-7b-nativeEnhanced", set_active=True)
```
- Notebooks
- Google Colab
- Kaggle
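As a hedged sketch of how the Adapters snippet above might be used end to end: the function below loads the adapter onto a base checkpoint and generates a completion. The function name `generate_response`, the `base_model` parameter, and the `max_new_tokens` default are illustrative assumptions, not part of the model card; the base checkpoint id is not specified on this page.

```python
# A sketch only, not the model card's official usage. Requires
# `pip install adapters transformers` plus a base checkpoint to load.

def generate_response(prompt: str, base_model: str, max_new_tokens: int = 128) -> str:
    """Load the 8bit-coder/alpaca-7b-nativeEnhanced adapter onto a base
    model and generate a completion for `prompt`."""
    # Imports are deferred so this sketch can be read and imported
    # without the heavy dependencies installed.
    from adapters import AutoAdapterModel
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoAdapterModel.from_pretrained(base_model)
    model.load_adapter("8bit-coder/alpaca-7b-nativeEnhanced", set_active=True)

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

The base checkpoint is taken as a parameter because the page renders it as "undefined"; pass whichever base model the adapter was published for.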
Commit · 43b83de · Parent(s): 4634358 · Update README.md
README.md CHANGED

```diff
@@ -11,7 +11,7 @@ library_name: adapter-transformers
 <h1 align="center">
 Alpaca 7B Native Enhanced
 </h1>
-<p align="center">The Most Advanced Alpaca 7B Model
+<p align="center">The Most Advanced Alpaca 7B Model</p>

 ## 📃 Model Facts
 - Trained natively on 8x Nvidia A100 40GB GPUs; no LoRA used
```