---
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
---
# Uploaded model
- **Developed by:** dad1909 (Huynh Dac Tan Dat)
- **License:** RMIT
# Model Card for dad1909/CyberSentinel
This repo contains a 4-bit quantized (via bitsandbytes) version of Meta's Meta-Llama-3-8B-Instruct, fine-tuned to identify vulnerable lines of code and describe the corresponding software vulnerability.
# Model Details
- **Model creator:** Meta
- **Original model:** Meta-Llama-3-8B-Instruct
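If you would rather load the checkpoint with plain `transformers` (no Unsloth), a 4-bit load via `BitsAndBytesConfig` should work; this is a minimal sketch, assuming a CUDA GPU is available:
```python
# Minimal sketch: 4-bit load with plain transformers + bitsandbytes (no Unsloth).
# Assumes a CUDA GPU; the compute dtype is an assumption (use bfloat16 on Ampere+).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4-bit at load time
    bnb_4bit_compute_dtype=torch.float16,  # dtype used for matmuls during inference
)

tokenizer = AutoTokenizer.from_pretrained("dad1909/CyberSentinel")
model = AutoModelForCausalLM.from_pretrained(
    "dad1909/CyberSentinel",
    quantization_config=bnb_config,
    device_map="auto",
)
```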
# Running the model in Google Colab with a TextStreamer (recommended)
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
```
```python
# Uninstall and reinstall xformers with CUDA support
!pip uninstall -y xformers
!pip install xformers[cuda]
```
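After the reinstall, it's worth confirming that the GPU build is actually picked up; a quick sanity check for a Colab GPU runtime:
```python
# Sanity check: CUDA should be visible and xformers should import cleanly.
import torch
import xformers

print("CUDA available:", torch.cuda.is_available())
print("xformers version:", xformers.__version__)
```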
```python
from unsloth import FastLanguageModel
import torch
from transformers import TextStreamer
max_seq_length = 1024  # Choose any; Unsloth supports RoPE scaling internally.
dtype = torch.float16  # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+.
load_in_4bit = True  # Use 4-bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dad1909/CyberSentinel",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
alpaca_prompt = """Below is a code snippet. Identify the line of code that is vulnerable and describe the type of software vulnerability.
### Code Snippet:
{}
### Vulnerability Description:
{}"""
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "import sqlite3\n\ndef create_table():\n conn = sqlite3.connect(':memory:')\n c = conn.cursor()\n c.execute('''CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)''')\n c.execute(\"INSERT INTO users (username, password) VALUES ('user1', 'pass1')\")\n c.execute(\"INSERT INTO users (username, password) VALUES ('user2', 'pass2')\")\n conn.commit()\n return conn\n\ndef vulnerable_query(conn, username):\n c = conn.cursor()\n query = f\"SELECT * FROM users WHERE username = '{username}'\"\n print(f\"Executing query: {query}\")\n c.execute(query)\n return c.fetchall()\n\n# Create a database and a table\nconn = create_table()\n\n# Simulate a user input with SQL injection\nuser_input = \"' OR '1'='1\"\nresults = vulnerable_query(conn, user_input)\n\n# Print the results\nprint(\"Results of the query:\")\nfor row in results:\n print(row)\n\n# Close the connection\nconn.close()\n",  # instruction: the code snippet to analyze
            "",  # output left empty for the model to generate
        )
    ],
    return_tensors="pt",
).to("cuda")
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1024)
```
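If you don't need token-by-token streaming, the same call works without the streamer; a minimal variation that decodes the finished sequence instead:
```python
# Variation on the call above: generate without streaming, then decode.
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```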
#### Install dependencies for the Transformers pipeline and AutoModelForCausalLM examples
```python
!pip install transformers
!pip install torch
!pip install accelerate
```
#### Transformers pipeline
```python
import transformers
import torch
model_id = "dad1909/CyberSentinel"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a chatbot that detects vulnerable code and explains software vulnerabilities!"},
    {"role": "user", "content": "What is a buffer overflow?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
)
print(outputs[0]["generated_text"][len(prompt):])
```
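Note the `terminators` list: Llama 3's chat template ends each assistant turn with `<|eot_id|>` rather than only the tokenizer's default EOS token, so passing both IDs as `eos_token_id` ensures generation stops at the end of the response.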
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "dad1909/CyberSentinel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a chatbot that detects vulnerable code and explains software vulnerabilities!"},
    {"role": "user", "content": "What is a buffer overflow?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
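The slice `outputs[0][input_ids.shape[-1]:]` strips the prompt tokens from the generated sequence, so only the newly generated response is decoded.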
## How to use
This repository contains two versions of the model, for use with `transformers` and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function; both approaches are shown in the examples above.
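To keep a local copy of the weights (for example, for use outside a notebook), a standard Hugging Face CLI download works; the target directory below is just an illustrative choice:
```python
# Download the full repository to a local directory (illustrative path).
!huggingface-cli download dad1909/CyberSentinel --local-dir CyberSentinel
```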
## Training Data
**Overview:** CyberSentinel is fine-tuned on dad1909/DSV, a dataset of code samples related to software vulnerabilities. The fine-tuning data includes publicly available instruction and output datasets.
**Data freshness:** The fine-tuning data is continuously updated with new vulnerability code samples.
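For reference, here is a minimal sketch of how instruction/output pairs from such a dataset could be rendered into the prompt template used above; the field names `code` and `description` are hypothetical, not the dataset's actual schema:
```python
# Hypothetical sketch: format one dataset record into the training prompt.
# The keys "code" and "description" are assumptions about the schema.
alpaca_prompt = """Below is a code snippet. Identify the line of code that is vulnerable and describe the type of software vulnerability.
### Code Snippet:
{}
### Vulnerability Description:
{}"""

def format_example(example: dict, eos_token: str) -> str:
    # Append EOS so the model learns where a completed answer ends.
    return alpaca_prompt.format(example["code"], example["description"]) + eos_token
```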