Update README.md

README.md (changed)

Rapnss Innovation: A unique AI built to inspire creativity and productivity.

## Usage

Get started with VIA-01 using the following Python code:

```python
from transformers import pipeline
import torch

# Initialize the pipeline
pipe = pipeline(
    "text-generation",
    model="rapnss/VIA-01",
    # ... additional arguments elided in this diff
)

prompt = "Write a Python function to sort a list:"
response = pipe(prompt)[0]['generated_text']
print(response)
```
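
The diff elides the rest of the `pipeline(...)` call. For reference, a minimal complete initialization might look like the sketch below; `torch_dtype`, `device_map`, and `max_new_tokens` are illustrative assumptions, not values taken from this README.

```python
from transformers import pipeline
import torch

# Minimal complete initialization; torch_dtype and device_map are
# assumed values for illustration, not the README's original arguments.
pipe = pipeline(
    "text-generation",
    model="rapnss/VIA-01",
    torch_dtype=torch.float16,  # assumption: half precision to reduce memory
    device_map="auto",          # assumption: uses the accelerate package
)

prompt = "Write a Python function to sort a list:"
# max_new_tokens is an assumed cap on the generated continuation
response = pipe(prompt, max_new_tokens=128)[0]['generated_text']
print(response)
```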

Example Output:

```
Write a Python function to sort a list:

def sort_list(arr):
    return sorted(arr)
```

## Installation

Install required dependencies:

```bash
pip install transformers torch accelerate gradio
```

## Performance

Inference Speed: Optimized for low-latency responses, typically ~20-40 seconds on standard CPU hardware (e.g., a Hugging Face free Space). For sub-10-second responses, use a GPU-enabled environment (e.g., a Hugging Face Pro Space).
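
The GPU path mentioned above can be exercised with a small sketch like the following; the device selection and `float16` dtype are assumptions for illustration, not part of the original README.

```python
from transformers import pipeline
import torch

# Use the first CUDA device if one is available, otherwise fall back to CPU.
# (device index and float16 dtype are illustrative assumptions)
device = 0 if torch.cuda.is_available() else -1

pipe = pipeline(
    "text-generation",
    model="rapnss/VIA-01",
    device=device,
    torch_dtype=torch.float16 if device == 0 else torch.float32,
)

print(pipe("Write a Python function to sort a list:",
           max_new_tokens=64)[0]['generated_text'])
```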