Update README.md
README.md CHANGED
@@ -33,25 +33,6 @@ The model was finetuned using the Unsloth library, leveraging its efficient trai
 - **Language**: English (`en`)
 - **License**: Apache-2.0
 
-## Usage
-
-### Loading the Model
-
-You can load the model and tokenizer using the following code snippet:
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-# Load the tokenizer and model
-tokenizer = AutoTokenizer.from_pretrained("inetnuc/llama-3-8b-chat-nuclear")
-model = AutoModelForCausalLM.from_pretrained("inetnuc/llama-3-8b-chat-nuclear")
-
-# Example of generating text
-inputs = tokenizer("what is the iaea approach for cyber security?", return_tensors="pt")
-outputs = model.generate(**inputs, max_new_tokens=128)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
-```
-
 ## Files and Versions
 
 | File Name | Description |
@@ -81,3 +62,25 @@ MUSTAFA UMUT OZBEK
 ## Contact
 https://www.linkedin.com/in/mustafaumutozbek/
 https://x.com/m_umut_ozbek
+
+
+## Usage
+
+### Loading the Model
+
+You can load the model and tokenizer using the following code snippet:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Load the tokenizer and model
+tokenizer = AutoTokenizer.from_pretrained("inetnuc/llama-3-8b-chat-nuclear")
+model = AutoModelForCausalLM.from_pretrained("inetnuc/llama-3-8b-chat-nuclear")
+
+# Example of generating text
+inputs = tokenizer("what is the iaea approach for cyber security?", return_tensors="pt")
+outputs = model.generate(**inputs, max_new_tokens=128)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
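
The snippet in this commit loads the model in full precision on CPU and passes the question as a raw string. For an 8B chat finetune, a more practical variant loads the weights in half precision across available GPUs and routes the prompt through the tokenizer's chat template. The sketch below assumes the finetune kept a Llama-3 style chat template and that `accelerate` is installed for `device_map`; neither is stated in the commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("inetnuc/llama-3-8b-chat-nuclear")

# Half precision roughly halves memory (about 16 GB for 8B parameters);
# device_map="auto" spreads layers over available devices via accelerate.
model = AutoModelForCausalLM.from_pretrained(
    "inetnuc/llama-3-8b-chat-nuclear",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the question in the tokenizer's chat template instead of a raw
# string (assumption: the repo ships a Llama-3 style chat template).
messages = [{"role": "user", "content": "What is the IAEA approach for cyber security?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```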