---
language:
- en
base_model:
- bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF
tags:
- code
license: mit
pipeline_tag: text-generation
---

# Navi

A high-performance, uncensored language model fine-tuned for cybersecurity applications.

## Table of Contents

- [Model Details](#model-details)
- [Usage](#usage)
  - [Linux/Mac Instructions](#linuxmac-instructions)
  - [Web UI](#web-ui)

## Model Details

This model is built upon [bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF](https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF), leveraging its capabilities for text generation in the cybersecurity domain.

## Usage

### Linux/Mac Instructions

To run the model locally:

1. Download the llamafile.
2. Open a terminal and navigate to the download directory.
3. Run the model using `./navi.llamafile`.
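
In a terminal, the steps above might look like the following sketch. It assumes the llamafile was saved to `~/Downloads`; on Linux and macOS a freshly downloaded file also needs to be marked executable before the first run:

```shell
# Sketch of the steps above; assumes navi.llamafile was saved to ~/Downloads.
cd ~/Downloads

# Downloaded files are not executable by default; mark this one runnable.
chmod +x navi.llamafile

# Launch an interactive chat session in the terminal.
./navi.llamafile
```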

### Web UI

For a web interface:

1. Follow the steps above.
2. Run the model with `./navi.llamafile --server --v2`.
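
Once the server is running, it can also be queried from the command line. The sketch below assumes llamafile's usual defaults: the server listens on `http://localhost:8080` and exposes an OpenAI-compatible chat completions endpoint (check the terminal output for the actual address):

```shell
# Send a single chat request to the running server.
# Assumes the default bind address of http://localhost:8080 (an assumption;
# the terminal output of --server shows the real address and port).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain what a buffer overflow is."}
        ]
      }'
```

The web UI itself is served at the same address; open it in a browser to chat interactively.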
|