Upload README.md with huggingface_hub
README.md CHANGED

@@ -11,7 +11,7 @@ base_model: cognitivecomputations/Dolphin3.0-Llama3.1-8B
 pipeline_tag: text-generation
 ---
 
-# nova-8b
+# nova-8b-cybersec
 
 Fine-tuned Dolphin3.0-Llama3.1-8B for cybersecurity tasks.
 
@@ -38,15 +38,15 @@ Fine-tuned Dolphin3.0-Llama3.1-8B for cybersecurity tasks.
 
 ### Ollama
 ```bash
-ollama run pki/nova-8b
+ollama run pki/nova-8b-cybersec
 ```
 
 ### Transformers
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModelForCausalLM.from_pretrained("pki/nova-8b")
-tokenizer = AutoTokenizer.from_pretrained("pki/nova-8b")
+model = AutoModelForCausalLM.from_pretrained("pki/nova-8b-cybersec")
+tokenizer = AutoTokenizer.from_pretrained("pki/nova-8b-cybersec")
 ```
 
 ## Files